
Part II - The Building Blocks of a Study

Published online by Cambridge University Press:  25 May 2023

Austin Lee Nichols
Affiliation:
Central European University, Vienna
John Edlund
Affiliation:
Rochester Institute of Technology, New York

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2023


References

References

American Association for Public Opinion Research (2016). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 9th ed. American Association for Public Opinion Research.Google Scholar
Andreadis, I. (2020). Text message (SMS) pre-notifications, invitations and reminders for web surveys. Survey Methods: Insights from the Field, Special Issue: Advancements in Online and Mobile Survey Methods. https://doi.org/10.11587/DX8NNEGoogle Scholar
Anseel, F., Lievens, F., Schollaert, E., & Choragwicka, B. (2010). Response rates in organizational science, 1995–2008: A meta-analytic review and guidelines for survey researchers. Journal of Business and Psychology, 25(3), 335349. https://doi.org/10.1007/s10869-010-9157-6CrossRefGoogle Scholar
Antoun, C., Zhang, C., Conrad, F. G., & Schober, M. F. (2016). Comparisons of online recruitment strategies for convenience samples: Craigslist, Google AdWords, Facebook, and Amazon Mechanical Turk. Field Methods, 28(3), 231246. https://doi.org/10.1177/1525822X15603149CrossRefGoogle Scholar
Arthur, W., Jr., Hagen, E., & George, F., Jr. (2021). The lazy or dishonest respondent: Detection and prevention. Annual Review of Organizational Psychology and Organizational Behavior, 8, 105137. https://doi.org/10.1146/annurev-orgpsych-012420-055324CrossRefGoogle Scholar
Bethlehem, J. (2009). Applied Survey Methods. A Statistical Perspective. John Wiley & Sons.Google Scholar
Boas, T. C., Christenson, D. P., & Glick, D. M. (2020). Recruiting large online samples in the United States and India: Facebook, Mechanical Turk, and Qualtrics. Political Science Research and Methods, 8(2), 232250. https://doi.org/10.1017/psrm.2018.28Google Scholar
Bosnjak, M., Tuten, T. L., & Wittmann, W. W. (2005). Unit (non) response in web‐based access panel surveys: An extended planned‐behavior approach. Psychology & Marketing, 22(6), 489505. https://doi.org/10.1002/mar.20070Google Scholar
Brenner, P. S., Cosenza, C., & Fowler, F. J., Jr. (2020). Which subject lines and messages improve response to e-mail invitations to web surveys? Field Methods, 32(4), 365382. https://doi.org/10.1177/1525822X20929647CrossRefGoogle Scholar
Callegaro, M., Villar, A., Yeager, D., & Krosnick, J. A. (2014). A critical review of studies investigating the quality of data obtained with online panels based on probability and nonprobability samples. In Callegaro, M., Baker, R., Bethlehem, J., et al. (eds.), Online Panel Research: A Data Quality Perspective (pp. 23–53). John Wiley & Sons.Google Scholar
Cantrell, J., Bennett, M., Thomas, R. K., et al. (2018). It’s getting late: Improving completion rates in a hard-to-reach sample. Survey Practice, 11(2). https://doi.org/10.29115/SP-2018-0019Google Scholar
Chandler, J. J. & Paolacci, G. (2017). Lie for a dime: When most prescreening responses are honest but most study participants are impostors. Social Psychological and Personality Science, 8(5), 500508. https://doi.org/10.1177/1948550617698203Google Scholar
Conn, K. M., Mo, C. H., & Sellers, L. M. (2019). When less is more in boosting survey response rates. Social Science Quarterly, 100(4), 14451458. https://doi.org/10.1111/ssqu.12625Google Scholar
Coopersmith, J., Vogel, L. K., Bruursema, T., & Feeney, K. (2016). Effects of incentive amount and type of web survey response rates. Survey Practice, 9(1), 110.Google Scholar
Coppock, A. (2019). Generalizing from survey experiments conducted on Mechanical Turk: A replication approach. Political Science Research and Methods, 7(3), 613628. https://doi.org/10.1017/psrm.2018.10CrossRefGoogle Scholar
Curran, P. G. (2016). Methods for the detection of carelessly invalid responses in survey data. Journal of Experimental Social Psychology, 66, 419. https://doi.org/10.1016/j.jesp.2015.07.006Google Scholar
Daikeler, J., Bošnjak, M., & Lozar Manfreda, K. (2020). Web versus other survey modes: An updated and extended meta-analysis comparing response rates. Journal of Survey Statistics and Methodology, 8(3), 513539. https://doi.org/10.1093/jssam/smz008CrossRefGoogle Scholar
Daniel, J. (2011). Sampling Essentials: Practical Guidelines for Making Sampling Choices. SAGE Publications.Google Scholar
De Bruijne, M. & Wijnant, A. (2014). Improving response rates and questionnaire design for mobile web surveys. Public Opinion Quarterly, 78(4), 951962. https://doi.org/10.1093/poq/nfu046Google Scholar
de Leeuw, E. D., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The influence of advance letters on response in telephone surveys. Public Opinion Quarterly, 71(3), 413443. https://doi.org/10.1093/poq/nfm014Google Scholar
Dennis, M. L. (1991). Changing the conventional rules: Surveying homeless people in nonconventional locations. Housing Policy Debate, 2(3), 699732. https://doi.org/10.1080/10511482.1991.9521070Google Scholar
Desilver, D. (2013). Chart of the week: Americans on the move. Pew Research Center, November 22. Available at: www.pewresearch.org/fact-tank/2013/11/22/chart-of-the-week-americans-on-the-move/.Google Scholar
Devine, E. G., Waters, M. E., Putnam, M., et al. (2013). Concealment and fabrication by experienced research subjects. Clinical Trials, 10, 935948. https://doi.org/10.1177/1740774513492917Google Scholar
Dillman, D. A. (1978). Mail and Telephone Surveys: The Total Design Method. John Wiley & Sons.Google Scholar
Dillman, D. & Edwards, M. (2016). Designing a mixed mode survey. In Wolf, C., Joye, D., Smith, T., & Fu, Y.-C. (eds.), SAGE Handbook of Survey Methodology (pp. 255268). SAGE Publications.Google Scholar
Dillman, D. (2017). The promise and challenge of pushing respondents to the Web in mixed-mode surveys. Survey Methodology, Statistics Canada, Catalogue No. 12-001-X, Vol. 43, No. 1. Available at: www.statcan.gc.ca/pub/12-001-x/2017001/article/14836-eng.htm.Google Scholar
Edwards, P. J., Roberts, I., & Clarke, M. J., et al. (2009). Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews, MR000008. https://doi.org/10.1002/14651858.MR000008.pub4Google Scholar
Etikan, I., Musa, S. A., & Alkassim, R. S. (2016). Comparison of convenience sampling and purposive sampling. American Journal of Theoretical and Applied Statistics, 5(1), 14. https://doi.org/ 10.11648/j.ajtas.20160501.11Google Scholar
European Society for Opinion and Marketing Research (2012). 28 Questions to help buyers of online samples. Available at: https://swiss-insights.ch/wp-content/uploads/2020/05/ESOMAR-28-Questions-to-Help-Buyers-of-Online-Samples-September-2012.pdf.Google Scholar
Eyrich-Garg, K. M. & Moss, S. L. (2017). How feasible is multiple time point web-based data collection with individuals experiencing street homelessness? Journal of Urban Health, 94(1), 6474. https://doi.org/10.1007/s11524-016-0109-yCrossRefGoogle ScholarPubMed
Gellar, J., Hughes, S., Delannoy, C., et al. (2020). Multilevel Regression with Poststratification for the Analysis of SMS Survey Data (No. c71d456bbf9f4026988e1a8107df4764). Mathematica Policy Research.Google Scholar
Göritz, A. S. (2006). Incentives in web studies: Methodological issues and a review. International Journal of Internet Science, 1(1), 5870.Google Scholar
Göritz, A. S. (2014). Determinants of the starting rate and the completion rate in online panel studies. Online Panel Research: Data Quality Perspective, A, 154170. https://doi.org/10.1002/9781118763520.ch7Google Scholar
Goyder, J. (2019). The Silent Minority: Non-Respondents in Sample Surveys. Routledge.CrossRefGoogle Scholar
Griggs, A. K., Smith, A. C., Berzofsky, M. E., et al. (2021). Examining the impact of a survey’s email timing on response latency, mobile response rates, and breakoff rates. Field Methods, March 30. https://doi.org/10.1177/1525822X21999160CrossRefGoogle Scholar
Groves, R. M., Cialdini, R. B., & Couper, M. P. (1992). Understanding the decision to participate in a survey. Public Opinion Quarterly, 56(4), 475495. https://doi.org/10.1086/269338Google Scholar
Groves, R.M. & Heeringa, S. G. (2006). Responsive design for household surveys: Tools for actively controlling survey errors and costs. Journal of the Royal Statistical Society: Series A, 169(3): 439457. https://doi.org/10.1111/j.1467-985X.2006.00423.xGoogle Scholar
Groves, R. M. & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: a meta-analysis. Public Opinion Quarterly, 72(2), 167189. https://doi.org/10.1093/poq/nfn011Google Scholar
Groves, R. M., Singer, E., & Corning, A. (2000). Leverage–saliency theory of survey participation: Description and an illustration. Public Opinion Quarterly, 64(3), 299308. https://www.jstor.org/stable/3078721Google Scholar
Haas, G. C., Trappmann, M., Keusch, F., Bähr, S., & Kreuter, F. (2020). Using geofences to collect survey data: Lessons learned from the IAB-SMART study. Survey Methods: Insights from the Field, December 10. https://doi.org/10.13094/SMIF-2020-00023CrossRefGoogle Scholar
Hall, E. A., Zuniga, R., Cartier, J., et al. (2003). Staying in Touch: A Fieldwork Manual of Tracking Procedures for Locating Substance Abusers in Follow-Up Studies, 2nd ed. UCLA Integrated Substance Abuse ProgramsGoogle Scholar
Heckathorn, D. D. & Cameron, C. J. (2017). Network sampling: From snowball and multiplicity to respondent-driven sampling. Annual Review of Sociology, 43, 101119. https://doi.org/10.1146/annurev-soc-060116-053556Google Scholar
Jia, P., Furuya-Kanamori, L., Qin, Z. S., Jia, P. Y., & Xu, C. (2021). Association between response rates and monetary incentives in sample study: A systematic review and meta-analysis. Postgraduate Medical Journal, 97(1150), 501510. https://dx.doi.org/10.1136/postgradmedj-2020-137868Google Scholar
Lawes, M., Hetschko, C., Sakshaug, J. W., & Grießemer, S. (2021). Contact modes and participation in app-based smartphone surveys: Evidence from a large-scale experiment. Social Science Computer Review, March 11. https://doi.org/10.1177/0894439321993832Google Scholar
Levine, B., Krotki, K., & Lavrakas, P. J. (2019). Redirected inbound call sampling (RICS) telephone surveying via a new survey sampling paradigm. Public Opinion Quarterly, 83(2), 386411. https://doi.org/10.1093/poq/nfz024Google Scholar
Lindeman, N. (2019) What is the average survey response rate? Available at: https://surveyanyplace.com/average-survey-response-rate/.Google Scholar
Lindsay, J. (2005). Getting the numbers: The unacknowledged work in recruiting for survey research. Field Methods, 17(1), 119128. https://doi.org/10.1177/1525822X04271028Google Scholar
Liu, M. & Inchausti, N. (2017). Improving survey response rates: The effect of embedded questions in web survey email Invitations. Survey Practice, 10(1), 16. https://doi.org/10.29115/SP-2017-0005Google Scholar
Liu, M. & Wronski, L. (2018). Examining completion rates in web surveys via over 25,000 real-world surveys. Social Science Computer Review, 36(1), 116124. https://doi.org/10.1177/0894439317695581Google Scholar
MacDonald, S. (2021). The science behind email open rates (and how to get more people to read your emails). Available at: www.superoffice.com/blog/email-open-rates/.Google Scholar
Marcus, B. & Schütz, A. (2005). Who are the people reluctant to participate in research? Personality correlates of four different types of nonresponse as inferred from self‐and observer ratings. Journal of Personality, 73(4), 959984. https://doi.org/10.1111/j.1467-6494.2005.00335.xGoogle Scholar
Marcus, B., Bosnjak, M., Lindner, S., Pilischenko, S., & Schütz, A. (2007). Compensating for low topic interest and long surveys: a field experiment on nonresponse in web surveys. Social Science Computer Review, 25(3), 372383. https://doi.org/10.1177/0894439307297606Google Scholar
Mavletova, A. & Couper, M. P. (2015). A meta-analysis of breakoff rates in mobile web surveys. In Mavletova, A. & Couper, M. P. (eds.), Mobile Research Methods: Opportunities and Challenges of Mobile Research Methodologies (pp. 8198). Available at: www.jstor.org/stable/j.ctv3t5r9n.11.Google Scholar
Mazzone, J. & Pickett, J. (2011). The Household Diary Study: Mail Use & Attitudes in FY 2010. The United States Postal Service.Google Scholar
McClean, C. (2020) Most Americans don’t answer cellphone calls from unknown numbers. Pew Research Center, December 14. Available at: www.pewresearch.org/fact-tank/2020/12/14/most-americans-dont-answer-cellphone-calls-from-unknown-numbers/.Google Scholar
Medway, R. L. & Fulton, J. (2012). When more gets you less: A meta-analysis of the effect of concurrent web options on mail survey response rates. Public Opinion Quarterly, 76(4), 733746. https://doi.org/10.1093/poq/nfs047Google Scholar
Meng, X. L. (2018). Statistical paradises and paradoxes in big data (I): Law of large populations, big data paradox, and the 2016 US presidential election. Annals of Applied Statistics, 12(2), 685726. https://10.1214/18-AOAS1161SFGoogle Scholar
Mercer, A., Caporaso, A., Cantor, D., & Townsend, R. (2015). How much gets you how much? Monetary incentives and response rates in household surveys. Public Opinion Quarterly, 79, 105129. https://doi.org/10.1093/poq/nfu059Google Scholar
Messer, B. L. & Dillman, D. A. (2011). Surveying the general public over the Internet using address-based sampling and mail contact procedures. Public Opinion Quarterly, 75, 429457. https://doi.org/10.1093/poq/nfr021.Google Scholar
Mook, D. G. (1983). In defense of external invalidity. American Psychologist, 38 (4), 379387. https://doi.org/10.1037/0003-066X.38.4.379CrossRefGoogle Scholar
Pasek, J. & Krosnick, J. A. (2010). Measuring intent to participate and participation in the 2010 census and their correlates and trends: Comparisons of RDD telephone and non-probability sample Internet survey data. Statistical Research Division of the US Census Bureau, 15, 2010.Google Scholar
Patton, M. Q. (2007). Sampling, qualitative (purposive). The Blackwell Encyclopedia of Sociology. John Wiley & Sons.Google Scholar
Petrovčič, A., Petrovčič, G., & Manfreda, K. L. (2016). The effect of email invitation elements on response rate in a web survey within an online community. Computers in Human Behavior, 56, 320329. https://doi.org/10.1016/j.chb.2015.11.025CrossRefGoogle Scholar
Porter, S. R. & Whitcomb, M. E. (2003). The impact of contact type on web survey response rates. Public Opinion Quarterly, 67, 579588.Google Scholar
Preacher, K. J., Rucker, D. D., MacCallum, R. C., & Nicewander, W. A. (2005). Use of the extreme groups approach: a critical reexamination and new recommendations. Psychological Methods, 10(2), 178192. https://doi.org/10.1037/1082-989X.10.2.178CrossRefGoogle ScholarPubMed
Rath, J. M., Williams, V. F., Villanti, A. C., et al. (2017). Boosting online response rates among nonresponders: a dose of funny. Social Science Computer Review, 35(5), 619632. https://doi.org/10.1177/0894439316656151Google Scholar
Revilla, M. (2017). Analyzing survey characteristics, participation, and evaluation across 186 surveys in an online opt-in panel in Spain. Methods, Data, Analyses, 11(2), 135162. https://doi.org/10.12758/mda.2017.02Google Scholar
Revilla, M. & Höhne, J. K. (2020). How long do respondents think online surveys should be? New evidence from two online panels in Germany. International Journal of Market Research, 62(5), 538545. https://doi.org/10.1177/1470785320943049Google Scholar
Reyes, G. (2020). Understanding nonresponse rates: Insights from 600,000 opinion surveys. The World Bank Economic Review, 34 (Supplement), S98S102. https://doi.org/10.1093/wber/lhz040Google Scholar
Rogelberg, S. G., Conway, J. M., Sederburg, M. E., et al. (2003). Profiling active and passive nonrespondents to an organizational survey. Journal of Applied Psychology, 88(6), 1104. https://doi.org/10.1037/0021-9010.88.6.1104Google Scholar
Sackett, P. R. & Yang, H. (2000). Correction for range restriction: An expanded typology. Journal of Applied Psychology, 85(1), 112118. https://doi.org/10.1037/0021-9010.85.1.112CrossRefGoogle ScholarPubMed
Sakshaug, J. W., Cernat, A., & Raghunathan, T. E. (2019). Do sequential mixed-mode surveys decrease nonresponse bias, measurement error bias, and total bias? An experimental study. Journal of Survey Statistics and Methodology, 7(4), 545571. https://doi.org/10.1093/jssam/smy024Google Scholar
Sánchez-Fernández, J., Muñoz-Leiva, F., & Montoro-Ríos, F. J. (2012). Improving retention rate and response quality in web-based surveys. Computers in Human Behavior, 28(2), 507514. https://doi.org/10.1016/j.chb.2011.10.023CrossRefGoogle Scholar
Sauermann, H. & Roach, M. (2013). Increasing web survey response rates in innovation research: An experimental study of static and dynamic contact design features. Research Policy, 42(1), 273286. https://doi.org/10.1016/j.respol.2012.05.003CrossRefGoogle Scholar
Schonlau, M. & Couper, M. P. (2017). Options for conducting web surveys. Statistical Science, 32(2), 279292. https://10.1214/16-STS597Google Scholar
Schouten, B., Peytchev, A., & Wagner, J. (2020). Adaptive Survey Design. Chapman and Hall/CRC Press.Google Scholar
Schumacher, S. & Kent, N. (2020). 8 charts on internet use around the world as countries grapple with COVID-19. Pew Research Center, April 2. Availabale at: www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/.Google Scholar
Shatz, I. (2017). Fast, free, and targeted: Reddit as a source for recruiting participants online. Social Science Computer Review, /35(4), 537549. https://doi.org/10.1177/0894439316650163Google Scholar
Shoemaker, P. J., Eichholz, M., & Skewes, E. A. (2002). Item nonresponse: Distinguishing between don‘t know and refuse. International Journal of Public Opinion Research, 14(2), 193201. https://doi.org/10.1093/ijpor/14.2.193Google Scholar
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on generality (COG): A proposed addition to all empirical papers. Perspectives on Psychological Science, 12(6), 11231128. https://doi.org/10.1177/1745691617708630Google Scholar
Singer, E. & Ye, C. (2013). The use and effects of incentives in surveys. The ANNALS of the American Academy of Political and Social Science, 645(1), 112141. https://doi.org/10.1177/0002716212458082Google Scholar
Snowberg, E. & Yariv, L. (2021). Testing the waters: Behavior across participant pools. American Economic Review, 111(2), 687719. https://10.1257/aer.20181065Google Scholar
Springer, V., Martini, P., Lindsey, S., & Vezich, I. (2016). Practice based considerations for using multi-stage survey design to reach special populations on Amazon’s Mechanical Turk. Survey Practice, 9(5), 18. https://doi.org/10.29115/SP-2016-0029Google Scholar
Tourangeau, R. (2018). Choosing a mode of survey data collection. In Vannette, D. & Krosnick, J. (eds.), The Palgrave Handbook of Survey Research. Palgrave Macmillan. https://doi.org/10.1007/978-3-319-54395-6_7Google Scholar
Trouteaud, A. R. (2004). How you ask counts: A test of internet-related components of response rates to a web-based survey. Social Science Computer Review, 22, 385392. https://doi.org/10.1177/0894439304265650Google Scholar
Tuten, T.L., Galesic, M., & Bosnjak, M. (2004). Effects of immediate versus delayed notification of prize draw results on response behavior in web surveys: An experiment. Social Science Computer Review, 22, 377384. https://doi.org/10.1177/0894439304265640Google Scholar
Van Mol, C. (2017). Improving web survey efficiency: the impact of an extra reminder and reminder content on web survey response. International Journal of Social Research Methodology, 20 (4), 317327. https://doi.org/10.1080//13645579.2016.1185255CrossRefGoogle Scholar
VerifyBee (2019). How to fix an invalid email address. VerifyBee, June 10. Available at: https://verifybee.com/how-to-fix-an-invalid-email-address.Google Scholar
Williams, D., Edwards, S., Giambo, P., & Kena, G. (2018). Cost effective mail survey design. In Proceedings of the Federal Committee on Statistical Methodology Research and Policy Conference, Washington, DC, December.Google Scholar

References

Afolabi, M. O., Bojang, K., D’Alessandro, U., et al. (2014). Multimedia informed consent tool for a low literacy African research population: Development and pilot testing. Journal of Clinical Research and Bioethics, 5(3), 178.Google ScholarPubMed
Afolabi, M. O., McGrath, N., D’Alessandro, U., et al. (2015). A multimedia consent tool for research participants in the Gambia: A randomized controlled trial. Bulletin of the World Health Organization, 93(5), 320328A.CrossRefGoogle ScholarPubMed
Allen, A. A., Chen, D. T., Bonnie, R. J., et al. (2017). Assessing informed consent in an opioid relapse prevention study with adults under current or recent criminal justice supervision. Journal of Substance Abuse Treatment, 81, 6672.Google Scholar
Appelbaum, P. S. & Grisso, T. (2001). Macarthur Competence Assessment Tool for Clinical Research (MacCAT-CR). Professional Resource Press/Professional Resource Exchange.Google Scholar
Appelbaum, P. S. & Grisso, T. (2007) Assessment of patients’ competence to consent to treatment. The New England Journal of Medicine, 357(18), 18341840.CrossRefGoogle ScholarPubMed
Appelbaum, P. S., Roth, L. H., & Lidz, C. (1982). The therapeutic misconception: Informed consent in psychiatric research. International Journal of Law and Psychiatry, 5, 319329.Google Scholar
Appelbaum, P. S., Lidz, C. W., & Klitzman, R. (2009). Voluntariness of consent to research: A conceptual model. Hastings Center Report, 39(1), 3039.Google Scholar
Beardsley, E., Jefford, M., & Mileshkin, I. (2007). Longer consent forms for clinical trials compromise patient understanding: So why are they lengthening? Journal of Clinical Oncology, 23(9), e13e14.Google Scholar
Benson, P. R., Roth, L. H., Appelbaum, P. S., Lidz, C. W., & Winslade, W. J. (1988). Information disclosure, subject understanding, and informed consent in psychiatric research. Law and Human Behavior, 12(4), 455475.Google Scholar
Bobb, M. R., Van Heukelom, P. G., Faine, B. A., et al. (2016). Telemedicine provides noninferior research informed consent for remote study enrollment: A randomized controlled trial. Academic Emergency Medicine, 23(7), 759765.Google Scholar
Bowers, N., Eisenberg, E., Montbriand, J., Jaskolka, J., & Roche-Nagle, G. (2017). Using a multimedia presentation to improve patient understanding and satisfaction with informed consent for minimally invasive vascular procedures. The Surgeon, 15(1), 711.Google Scholar
California Legislative Information (2003). Human experimentation, California Health and Safety Code – HSC § 24178. Available at: https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=HSC&division=20.&title=&part=&chapter=1.3.&article.Google Scholar
Carlson, L. (2013). Research ethics and intellectual disability: Broadening the debate. Yale Journal of Biology and Medicine, 86, 303314.Google Scholar
Carpenter, W. T. Jr., Gold, J. M., Lahti, A. C., et al. (2000). Decisional capacity for informed consent in schizophrenia research. Archives of General Psychiatry, 57(6), 533538.CrossRefGoogle ScholarPubMed
Christopher, P. P., Stein, M. D., Springer, S. A., et al. (2016). An exploratory study of therapeutic misconception among incarcerated clinical trial participants. AJOB Empirical Bioethics, 7(1), 2430.Google Scholar
Christopher, P. P., Appelbaum, P. S., Truong, D., et al. (2017). Reducing therapeutic misconception: A randomized intervention trial in hypothetical clinical trials. PLoS ONE, 12(9), e018224.Google Scholar
Coletti, A. S., Heagerty, P., Sheon, A. R., et al. (2003). Randomized, controlled evaluation of a prototype informed consent process for HIV vaccine efficacy trials. Journal of Acquired Immune Deficiency Syndromes, 32(2), 161169.Google Scholar
Davies, A. N., Waghorn, M., Webber, K., et al. (2018). A cluster randomised feasibility trial of clinically assisted hydration in cancer patients in the last days of life. Palliative Medicine, 32(4), 733743.Google Scholar
De Sutter, E., Zace, D., Boccia, S., et al. (2020) Implementation of electronic informed consent in biomedical research and stakeholders” perspectives: Systematic review. Journal of Medical Internet Research, 22(10), e19129.Google Scholar
Dugosh, K. L., Festinger, D. S., Croft, J. R., & Marlowe, D. B. (2010). Measuring coercion to participate in research within a doubly vulnerable population: Initial development of the coercion assessment scale. Journal of Empirical Research on Human Ethics, 5(1), 93102.Google Scholar
Dugosh, K. L., Festinger, D. S., Marlowe, D. B., & Clements, N. T. (2014). Developing an index to measure the voluntariness of consent to research. Journal of Empirical Research on Human Ethics, 9(4), 6070.Google Scholar
Dunn, L. B. & Jeste, D. V. (2001). Enhancing informed consent for research and treatment. Neuropsychopharmacology, 24(6), 595607.Google Scholar
Dunn, L. B., Nowrangi, M. A., Palmer, B. W., Jeste, D. V., & Saks, E. R. (2006). Assessing decisional capacity for clinical research or treatment: A review of instruments. American Journal of Psychiatry, 163(8), 13231334.Google Scholar
Edlund, J. E., Edlund, A. E., & Carey, M. G. (2015). Patient understanding of potential risk and benefit with informed consent in a left ventricular assist device population: A pilot study. Journal of Cardiovascular Nursing, 30(5), 435439.Google Scholar
Electronic Code of Federal Regulations (2018). Protection of Human Subjects, 45 C.F.R. § 46. Available at: www.ecfr.gov/on/2018-07-19/title-45/subtitle-A/subchapter-A/part-46.Google Scholar
Enama, M. E., Hu, Z., Gordon, I., et al. (2012). Randomization to standard and concise informed consent forms: Development of evidence-based consent practices. Contemporary Clinical Trials, 33, 895902.CrossRefGoogle ScholarPubMed
Evans, C. J., Yorganci, E., Lewis, P., et al. (2020). Processes of consent in research for adults with impaired mental capacity nearing the end of life: Systematic review and transparent expert consultation (MORECare_Capacity statement). BMC Medicine, 18(1), 221.Google Scholar
Faden, R. (1996). The Advisory Committee on Human Radiation Experiments. Hastings Center Report, 26(5), 510.CrossRefGoogle ScholarPubMed
Festinger, D. S., Ratanadilok, K., Marlowe, D. B., et al. (2007). Neuropsychological functioning and recall of research consent information among drug court clients. Ethics & Behavior, 17(2), 163186.Google Scholar
Festinger, D., Marlowe, D., Dugosh, K., Croft, J., & Arabia, P. (2008). Higher magnitude cash payments improve research follow-up rates without increasing drug use or perceived coercion. Drug and Alcohol Dependence, 96(1–2), 128135.Google Scholar
Festinger, D. S., Marlowe, D. B., Croft, J. R., et al. (2009). Monetary incentives improve recall of research consent information: It pays to remember. Experimental and Clinical Psychopharmacology, 17(2), 99104.Google Scholar
Festinger, D. S., Dugosh, K. L., Croft, J. R., Arabia, P. L., & Marlowe, D. B. (2010). Corrected feedback: A procedure to enhance recall of informed consent to research among substance abusing offenders. Ethics & Behavior, 20(5), 387399.Google Scholar
Festinger, D. S., Dugosh, K. L., Croft, J. R., Arabia, P. L., & Marlowe, D. B. (2011). Do research intermediaries reduce perceived coercion to enter research trials among criminally involved substance abusers? Ethics & Behavior, 21(3), 252259.Google Scholar
Festinger, D. S., Dugosh, K. L., Marlowe, D. B., & Clements, N. (2014). Achieving new levels of recall in consent to research by combining remedial and motivational techniques. Journal of Medical Ethics, 40(4), 264268.Google Scholar
Fitzgerald, D. W., Marotte, C., Verdier, R. I., Johnson, W. D., & Pape, J. W. (2002). Comprehension during informed consent in a less-developed country. Lancet, 360, 13011302.Google Scholar
Flory, J. & Emanuel, E. (2004). Interventions to improve research participants’ understanding in informed consent for research: A systematic review. Journal of the American Medical Association, 292, 15931601.Google Scholar
Forster, D. G. & Borasky, D. A., Jr. (2018). Adults lacking capacity to give consent: When is it acceptable to include them in research? Therapeutic Innovation & Regulatory Science, 52(3), 275279.Google Scholar
Frost, C. J., Johnson, E. P., Witte, B., et al. (2021). Electronic informed consent information for residual newborn specimen research: Findings from focus groups with diverse populations. Journal of Community Genetics, 12(1), 199203.Google Scholar
Gardner, W., Hoge, S. K., Bennett, N., et al. (1993). Two scales for measuring patients’ perceptions for coercion during mental hospital admission. Behavioral Sciences and the Law, 11(3), 307321.Google Scholar
Gilbert, T., Bosquet, A., Thomas-Anterion, C., Bonnefoy, M., & Le Saux, O. (2017). Assessing capacity to consent for research in cognitively impaired older patients. Clinical Interventions in Aging, 12, 15531563.Google Scholar
Gupta, U. C. (2013). Informed consent in clinical research: Revisiting few concepts and areas. Perspectives in Clinical Research, 4(1), 2632.Google Scholar
Ham, D. Y., Choi, W. S., Song, S. H., et al. (2016). Prospective randomized controlled study on the efficacy of multimedia informed consent for patients scheduled to undergo green-light high-performance system photoselective vaporization of the prostate. World Journal of Men’s Health, 34(1), 4755.CrossRefGoogle Scholar
Ho, P., Downs, J., Bulsara, C., Patman, S., & Hill, A. (2017) Addressing challenges in gaining informed consent for a research study investigating falls in people with intellectual disability. British Journal of Learning Disabilities, 46(2), 92100.Google Scholar
International Military Tribunal (1950). Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10. Government Printing Office.Google Scholar
Jeste, D. V., Palmer, B. W., Appelbaum, P. S., et al. (2007). A new brief instrument for assessing decisional capacity for clinical research. Archives of General Psychiatry, 64(8), 966974.Google Scholar
Jeste, D. V., Palmer, B. W., Golshan, S., et al. (2009). Multimedia consent for research in people with schizophrenia and normal subjects: A randomized controlled trial. Schizophrenia Bulletin, 35(4), 719729.Google Scholar
Jones, J. H. (1993). Bad Blood: The Tuskegee Syphilis Experiment. The Free Press.Google Scholar
Kass, N., Taylor, H., Ali, J., Hallez, K. & Chaisson, L. (2015). A pilot study of simple interventions to improve informed consent in clinical research: Feasibility, approach, and results. Clinical Trials, 12(1), 5466.Google Scholar
Kim, E. J. & Kim, S. H. (2015). Simplification improves understanding of informed consent information in clinical trials regardless of health literacy level. Clinical Trials, 12(3), 232236.Google Scholar
Kraft, S. A., Constantine, M., Magnus, D., et al. (2017). A randomized study of multimedia informational aids for research on medical practices: Implications for informed consent. Clinical Trials, 14(1), 94102.CrossRefGoogle ScholarPubMed
Kucia, A. M. & Horowitz, J. D. (2000). Is informed consent to clinical trials an “upside selective” process in acute coronary syndromes? American Heart Journal, 140, 94–97.
Layman, E. (2009). Human experimentation: Historical perspective of breaches of ethics in U.S. healthcare. The Health Care Manager, 28(4), 354–374.
Lepola, P., Needham, A., Mendum, J., et al. (2016). Informed consent for paediatric clinical trials in Europe. Archives of Disease in Childhood, 101(11), 1017–1025.
Lidz, C. W. (2006). The therapeutic misconception and our models of competency and informed consent. Behavioral Sciences & the Law, 24(4), 535–546.
Madeira, J. L. & Andraka-Christou, B. (2016). Paper trials, trailing behind: Improving informed consent to IVF through multimedia approaches. Journal of Law and the Biosciences, 3(1), 2–38.
Maloy, J. W. & Bass, P. F. (2020). Understanding broad consent. Ochsner Journal, 20(1), 81–86.
Martel, M. L., Klein, L. R., Miner, J. R., et al. (2018). A brief assessment of capacity to consent instrument in acutely intoxicated emergency department patients. American Journal of Emergency Medicine, 36, 18–23.
Martin-Kerry, J., Bower, P., Young, B., et al. (2017). Developing and evaluating multimedia information resources to improve engagement of children, adolescents, and their parents with trials (TRECA study): Study protocol for a series of linked randomised controlled trials. Trials, 18, 265.
Matsui, K., Lie, R. K., Turin, T. C., & Kita, Y. (2012). A randomized controlled trial of short and standard-length consent for a genetic cohort study: Is longer better? Journal of Epidemiology, 22, 308–316.
Milgram, S. (1974). Obedience to Authority. Harper Collins.
Miller, C. M., Searight, H. R., Grable, D., et al. (1994). Comprehension and recall of the informational content of the informed consent document: An evaluation of 168 patients in a controlled clinical trial. Journal of Clinical Research and Drug Development, 8(4), 237–248.
Morán-Sánchez, I., Luna, A., & Pérez-Cárceles, M. D. (2016). Enhancing the informed consent process in psychiatric outpatients with a brief computer-based method. Psychiatry Research, 245, 354–360.
Moser, D. J., Arndt, S., Kanz, J. E., et al. (2004). Coercion and informed consent in research involving prisoners. Comprehensive Psychiatry, 45(1), 1–9.
Muir, K. W. & Lee, P. P. (2009). Literacy and informed consent: A case for literacy screening in glaucoma research. Archives of Ophthalmology, 127(5), 698–699.
National Bioethics Advisory Commission (1998). Research involving persons with mental disorders that may affect decision making capacity. Available at: https://bioethicsarchive.georgetown.edu/nbac/capacity/TOC.htm.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979). The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research. US Department of Health and Human Services.
National Institutes of Health (2009). Research involving individuals with questionable capacity to consent: Points to consider. Available at: https://grants.nih.gov/grants/policy/questionablecapacity.htm.
Neilson, G., Chaimowitz, G., & Zuckerberg, J. (2015). Informed consent to treatment in psychiatry. Canadian Journal of Psychiatry, 60, 1–12.
Nishimura, A., Carey, J., Erwin, P. J., et al. (2013). Improving understanding in the research informed consent process: A systematic review of 54 interventions tested in randomized control trials. BMC Medical Ethics, 14, 28.
Palmer, B. W., Dunn, L. B., Appelbaum, P. S., et al. (2005). Assessment of capacity to consent to research among older persons with schizophrenia, Alzheimer’s disease, or diabetes mellitus. Archives of General Psychiatry, 62(7), 726–733.
Reiser, S. J. & Knudson, P. (1993). Protecting research subjects after consent: The case for the research intermediary. IRB, 15(2), 10–11.
Rosique, I., Pérez-Cárceles, M. D., Romero-Martín, M., Osuna, E., & Luna, A. (2006). The use and usefulness of information for patients undergoing anaesthesia. Medicine and Law, 25(4), 715–727.
Rothwell, E., Wong, B., Rose, N. C., et al. (2014). A randomized controlled trial of an electronic informed consent process. Journal of Empirical Research on Human Research Ethics, 9(5), 1–7.
Ryan, C. (2018). Computer and Internet use in the United States: 2016. American Community Survey Reports. Available at: www.census.gov/library/publications/2018/acs/acs-39.html.
Seaman, J. B., Terhorst, L., Gentry, A., et al. (2015). Psychometric properties of a decisional capacity screening tool for individuals contemplating participation in Alzheimer’s disease research. Journal of Alzheimer’s Disease, 46(1), 1–9.
Sheridan, R., Martin-Kerry, J., Watt, I., et al. (2019). User testing digital, multimedia information to inform children, adolescents and their parents about healthcare trials. Journal of Child Health Care, 23(3), 468–482.
Shuster, E. (1997). Fifty years later: The significance of the Nuremberg Code. The New England Journal of Medicine, 337(20), 1436–1440.
Siu, J. M., Rotenberg, B. W., Franklin, J. H., & Sowerby, L. J. (2016). Multimedia in the informed consent process for endoscopic sinus surgery: A randomized control trial. Laryngoscope, 126(6), 1273–1278.
Sonne, S. C., Andrews, J. O., Gentilin, S. M., et al. (2013). Development and pilot testing of a video-assisted informed consent process. Contemporary Clinical Trials, 36(1), 25–31.
Spencer, S. P., Stoner, M. J., Kelleher, K., & Cohen, D. M. (2015). Using a multimedia presentation to enhance informed consent in a pediatric emergency department. Pediatric Emergency Care, 31(8), 572–576.
Stiles, P. G., Poythress, N. G., Hall, A., Falkenbach, D., & Williams, R. (2001). Improving understanding of research content disclosures among persons with mental illness. Psychiatric Services, 52, 780–785.
Stroup, S. & Appelbaum, P. (2003). The subject advocate: Protecting the interests of participants with fluctuating decision making capacity. IRB, 25(3), 9–11.
Stunkel, L., Benson, M., McLellan, L., et al. (2010). Comprehension and informed consent: Assessing the effect of a short consent form. IRB: Ethics & Human Research, 32(4), 1–9.
Taub, H. A. & Baker, M. T. (1983). The effect of repeated testing upon comprehension of informed consent materials by elderly volunteers. Experimental Aging Research, 9, 135–138.
Taub, H. A., Kline, G. E., & Baker, M. T. (1981). The elderly and informed consent: Effects of vocabulary level and corrected feedback. Experimental Aging Research, 7, 137–146.
Tipotsch-Maca, S. M., Varsits, R. M., Ginzel, C., & Vescei-Marlovits, P. V. (2016). Effect of a multimedia-assisted informed consent procedure on the information gain, satisfaction, and anxiety of cataract surgery patients. Journal of Cataract and Refractive Surgery, 42(1), 110–116.
US Department of Education (2019). Adult literacy in the United States. Available at: https://nces.ed.gov/pubs2019/2019179/index.asp.
US Food and Drug Administration (2016). Use of electronic informed consent: Questions and answers. Available at: www.fda.gov/regulatory-information/search-fda-guidance-documents/use-electronic-informed-consent-clinical-investigations-questions-and-answers.
Welch, B. M., Marshall, E., Qanungo, S., et al. (2016). Teleconsent: A novel approach to obtain informed consent for research. Contemporary Clinical Trials Communications, 15(3), 74–79.
Westra, A. E. & de Beaufort, I. (2015). Improving the Helsinki Declaration’s guidance on research in incompetent subjects. Journal of Medical Ethics, 41, 278–280.
Winter, M., Kam, J., Nalavenkata, S., et al. (2016). The use of portable video media vs standard verbal communication in the urological consent process: A multicenter, randomised controlled, crossover trial. BJU International, 118(5), 823–828.
Wirshing, D. A., Wirshing, W. C., Marder, S. R., Liberman, R. P., & Mintz, J. (1998). Informed consent: Assessment of comprehension. American Journal of Psychiatry, 155, 1508–1511.
World Medical Association (2013). World Medical Association Declaration of Helsinki: Ethical principles for medical research involving human subjects. Journal of the American Medical Association, 310, 2191–2194.

References

Argyris, C. (1968). Some unintended consequences of rigorous research. Psychological Bulletin, 70(3, Pt. 1), 185–197. https://doi.org/10.1037/h0026145
Aslaksen, P. M., Myrbakk, I. N., Høifødt, R. S., & Flaten, M. A. (2007). The effect of experimenter gender on autonomic and subjective responses to pain stimuli. Pain, 129(3), 260–268.
Atwood, S., Mehr, S. A., & Schachner, A. (2020). Expectancy effects threaten the inferential validity of synchrony-prosociality research [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/zjy8u
Bartlett, F. C. (1932). Remembering: A Study in Experimental and Social Psychology. Cambridge University Press.
Benstead, L. J. (2014). Does interviewer religious dress affect survey responses? Evidence from Morocco. Politics and Religion, 7(4), 734–760. https://doi.org/10.1017/S1755048314000455
Bishop, D. V. M. (2020). The psychology of experimental psychologists: Overcoming cognitive constraints to improve research. Quarterly Journal of Experimental Psychology, 73(1), 1–19. https://doi.org/10.1177/1747021819886519
Brophy, J. E. & Good, T. L. (1970). Teachers’ communication of differential expectations for children’s classroom performance: Some behavioral data. Journal of Educational Psychology, 61(5), 365–374. https://doi.org/10.1037/h0029908
Dehaene, S. & Cohen, L. (2011). The unique role of the visual word form area in reading. Trends in Cognitive Sciences, 15(6), 254–262. https://doi.org/10.1016/j.tics.2011.04.003
De Vries, Y. A., Roest, A. M., de Jonge, P., et al. (2018). The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: The case of depression. Psychological Medicine, 48, 2453–2455. https://doi.org/10.1017/S0033291718001873
Duyx, B., Urlings, M. J. E., Swaen, G. M. H., Bouter, L. M., & Zeegers, M. P. (2017). Scientific citations favor positive results: A systematic review and meta-analysis. Journal of Clinical Epidemiology, 88, 92–101. https://doi.org/10.1016/j.jclinepi.2017.06.002
Eden, D. (1984). Self-fulfilling prophecy as a management tool: Harnessing Pygmalion. The Academy of Management Review, 9(1), 64.
Edlund, J. E., Cuccolo, K., Irgens, M. S., Wagge, J. R., & Zlokovich, M. S. (2021). Saving science through replication studies. Perspectives on Psychological Science, 17(1), 216–225. https://doi.org/10.1177/1745691620984385
Edlund, J. E., Lange, K. M., Sevene, A. M., et al. (2017). Participant crosstalk: Issues when using the Mechanical Turk. Tutorials in Quantitative Methods for Psychology, 13(3), 174–182.
Edlund, J. E., Sagarin, B. J., Skowronski, J. J., Johnson, S., & Kutter, J. (2009). Whatever happens in the laboratory stays in the laboratory: The prevalence and prevention of participant crosstalk. Personality and Social Psychology Bulletin, 35, 635–642.
Ferguson, C. J. (2015). Pay no attention to that data behind the curtain: On angry birds, happy children, scholarly squabbles, publication bias, and why betas rule metas. Perspectives on Psychological Science, 10(5), 683–691. https://doi.org/10.1177/1745691615593353
Fischhoff, B. & Beyth, R. (1975). “I knew it would happen”: Remembered probabilities of once-future things. Organizational Behavior & Human Performance, 13(1), 1–16. https://doi.org/10.1016/0030-5073(75)90002-1
Forster, K. L. (2000). The potential for experimenter bias effects in word recognition experiments. Memory and Cognition, 28, 1109–1115.
French, J. R. P. (1953). Experiments in field settings. In Festinger, L. & Katz, D. (eds.), Research Methods in the Behavioral Sciences (pp. 98–135). Holt, Rinehart and Winston.
Friese, M. & Frankenbach, J. (2020). p-Hacking and publication bias interact to distort meta-analytic effect size estimates. Psychological Methods, 25(4), 456–471.
Granberg, D. & Holmberg, S. (1992). The Hawthorne effect in election studies: The impact of survey participation on voting. British Journal of Political Science, 22(2), 240–247.
Hart, W., Albarracín, D., Eagly, A. H., et al. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135(4), 555–588.
Haslam, N., Loughnan, S., & Perry, G. (2014). Meta-Milgram: An empirical synthesis of the obedience experiments. PloS One, 9(4), e93927. https://doi.org/10.1371/journal.pone.0093927
Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biology, 13, e1002106.
Hilton, J. L. & von Hippel, W. (1996). Stereotypes. Annual Review of Psychology, 47(1), 237–271. https://doi.org/10.1146/annurev.psych.47.1.237
Holman, L., Head, M. L., Lanfear, R., & Jennions, M. D. (2015). Evidence of experimental bias in the life sciences: Why we need blind data recording. PLoS Biology, 13(7). https://doi.org/10.1371/journal.pbio.1002190
Howe, L. C., Goyer, J. P., & Crum, A. J. (2017). Harnessing the placebo effect: Exploring the influence of physician characteristics on placebo response. Health Psychology, 36(11), 1074–1082. https://doi.org/10.1037/hea0000499
Hoyt, W. T. (2000). Rater bias in psychological research: When is it a problem and what can we do about it? Psychological Methods, 5, 64–86. https://doi.org/10.1037/1082-989X.5.1.64
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Kállai, I., Barke, A., & Voss, U. (2004). The effects of experimenter characteristics on pain reports in women and men. Pain, 112(1), 142–147. https://doi.org/10.1016/j.pain.2004.08.008
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2, 196–217.
Klecka, H., Johnston, I., Bowman, N. D., & Green, C. S. (2021). Researchers’ commercial video game knowledge associated with differences in beliefs about the impact of gaming on human behavior. Entertainment Computing, 38, 100406. https://doi.org/10.1016/j.entcom.2021.100406
Klein, O., Doyen, S., Leys, C., et al. (2012). Low hopes, high expectations: Expectancy effects and the replicability of behavioral experiments. Perspectives on Psychological Science, 7, 572–584.
Levine, F. M. & De Simone, L. L. (1991). The effects of experimenter gender on pain report in male and female subjects. Pain, 44, 69–72.
Marx, D. M. & Goff, P. A. (2005). Clearing the air: The effect of experimenter race on target’s test performance and subjective experience. British Journal of Social Psychology, 44, 645–657.
McCallum, E. B. & Peterson, Z. D. (2015). Effects of experimenter contact, setting, inquiry mode, and race on women’s self-report of sexual attitudes and behaviors: An experimental study. Archives of Sexual Behavior, 44, 2287–2297.
McCambridge, J. & Day, M. (2007). Randomized controlled trial of the effects of completing the Alcohol Use Disorders Identification Test questionnaire on self-reported hazardous drinking. Addiction, 103, 241–248.
McCambridge, J., Witton, J., & Elbourne, D. R. (2014). Systematic review of the Hawthorne effect: New concepts are needed to study research participation effects. Journal of Clinical Epidemiology, 67(3), 267–277. https://doi.org/10.1016/j.jclinepi.2013.08.015
Meier, A., Domahidi, E., & Günther, E. (2020). Computer-Mediated Communication and Mental Health: A Computational Scoping Review of an Interdisciplinary Field. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190932596.013.4
Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67(4), 371–378. https://doi.org/10.1037/h0040525
Modic-Stanke, K. & Ivanec, D. (2016). Pain threshold: Measure of pain sensitivity or social behavior? Psihologija, 49(1), 37–50. https://doi.org/10.2298/PSI1601037M
Morris, D., Fraser, S., & Wormald, R. (2007). Masking is better than blinding. BMJ: British Medical Journal (International Edition), 334(7597), 799.
Munafò, M. R., Nosek, B. A., Bishop, D. V. M., et al. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021. https://doi.org/10.1038/s41562-016-0021
Murray, M., Swan, A. V., Kiryluk, S., & Clarke, G. C. (1988). The Hawthorne effect in the measurement of adolescent smoking. Journal of Epidemiology & Community Health, 42, 304–306.
Nichols, A. L. & Edlund, J. E. (2015). Practicing what we preach (and sometimes study): Methodological issues in experimental laboratory research. Review of General Psychology, 19(2), 191–202.
Nichols, A. L. & Maner, J. K. (2008). The good-subject effect: Investigating participant demand characteristics. Journal of General Psychology, 135(2), 151–165.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175–220. https://doi.org/10.1037/1089-2680.2.2.175
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2017). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114
Nosek, B. A. & Lakens, D. (2014). Registered Reports: A method to increase the credibility of published results. Social Psychology, 45, 137–141.
Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783.
Pfungst, O. (1911). Clever Hans (The Horse of Mr. von Osten). Holt, Rinehart, & Winston.
Rennung, M. & Göritz, A. S. (2016). Prosocial consequences of interpersonal synchrony: A meta-analysis. Zeitschrift für Psychologie, 224(3), 168–189. https://doi.org/10.1027/2151-2604/a000252
Rogers, L. J., Wilson, K. G., Gohm, C. L., & Merwin, R. M. (2007). Revisiting written disclosure: The effects of warm versus cold experimenters. Journal of Social and Clinical Psychology, 26(5), 556–574. https://doi.org/10.1521/jscp.2007.26.5.556
Rosenthal, R. (1963). On the social psychology of the psychological experiment: The experimenter’s hypothesis as unintended determinant of experimental results. American Scientist, 51(2), 268–283.
Rosenthal, R. (1973). On the Social Psychology of the Self-Fulfilling Prophecy: Further Evidence for Pygmalion Effects and Their Mediating Mechanisms. MMS Modular Publications.
Rosenthal, R. (1997). Interpersonal Expectancy Effects: A Forty Year Perspective. SAGE Publications.
Rosenthal, R. & Fode, K. (1963). Psychology of the scientist: V. Three experiments in experimenter bias. Psychological Reports, 12, 491–511.
Rosenthal, R. & Jacobson, L. (1968). Pygmalion in the Classroom: Teacher Expectation and Pupils’ Intellectual Development. Holt, Rinehart and Winston.
Rosenthal, R. & Rosnow, R. L. (2008). Essentials of Behavioral Research: Methods and Data Analysis, 3rd ed. McGraw-Hill.
Rosenthal, R. & Rosnow, R. L. (2009). Artifacts in Behavioral Research, 2nd ed. Oxford University Press.
Rosenzweig, S. (1933). The experimental situation as a psychological problem. Psychological Review, 40(4), 337–354. https://doi.org/10.1037/h0074916
Saretsky, G. (1972). The OEO P.C. experiment and the John Henry effect. The Phi Delta Kappan, 53(9), 579–581.
Shaywitz, S. E., Mody, M., & Shaywitz, B. A. (2006). Neural mechanisms in dyslexia. Current Directions in Psychological Science, 15(6), 278–281. https://doi.org/10.1111/j.1467-8721.2006.00452.x
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. https://doi.org/10.1177/0956797611417632
Spring, B. & Alexander, B. L. (1989). Sugar and hyperactivity: Another look. In Shepherd, R. (ed.), Handbook of the Psychophysiology of Human Eating (pp. 231–249). Wiley.
Strickland, B. & Suben, A. (2012). Experimenter philosophy: The problem of experimenter bias in experimental philosophy. Review of Philosophy and Psychology, 3(3), 457–467.
Thorson, K. R., Mendes, W. B., & West, T. V. (2019). Controlling the uncontrolled: Are there incidental experimenter effects on physiologic responding? Psychophysiology, 57, e13500. https://doi.org/10.1111/psyp.13500
Tuyttens, F. A. M., de Graaf, S., Heerkens, J. L. T., et al. (2014). Observer bias in animal behavior research: Can we believe what we score, if we score what we believe? Animal Behaviour, 90, 273–280. http://dx.doi.org/10.1016/j.anbehav.2014.02.007
Vallier, H. & Timmerman, C. (2008). Clinical trials and the reorganization of medical research in post-Second World War Britain. Medical History, 52(4), 493–510.
Vicente, K. J. & Brewer, W. F. (1993). Reconstructive remembering of the scientific literature. Cognition, 46, 101–128.
Wason, P. C. (1968). Reasoning about a rule. The Quarterly Journal of Experimental Psychology, 20(3), 273–281. https://doi.org/10.1080/14640746808400161
Weinstein, R. S. (2018). Pygmalion at 50: Harnessing its power and application in schooling. Educational Research and Evaluation, 24(3–5), 346–365.
Winchester, C. L. & Salji, M. (2016). Writing a literature review. Journal of Clinical Urology, 9(5), 309–312.
Yarkoni, T. (2022). The generalizability crisis. Behavioral and Brain Sciences, 45, e1. https://doi.org/10.1017/S0140525X20001685

Further Reading

For a detailed example of a funnel debriefing procedure and the empirical test of various post-experimental practices including suspicion probing, we recommend the following article:

Blackhart, G. C., Brown, K. E., Clark, T., Pierce, D. L., & Shell, K. (2012). Assessing the adequacy of postexperimental inquiries in deception research and the factors that promote participant honesty. Behavior Research Methods, 44, 24–40. https://doi.org/10.3758/s13428-011-0132-6

For further discussion of the history and progression of manipulation checks as well as specific recommendations for their use, we recommend Table 4 in the following article:

Ejelöv, E. & Luke, T. (2020). “Rarely safe to assume”: Evaluating the use and interpretation of manipulation checks in experimental social psychology. Journal of Experimental Social Psychology, 87, 103937. https://doi.org/10.1016/j.jesp.2019.103937

We are proponents of manipulation checks (with the proper precautions), but criticisms of manipulation checks should be seriously considered. For further reading on critiques of manipulation check practices we recommend the following article:

Hauser, D., Ellsworth, P., & Gonzalez, R. (2018). Are manipulation checks necessary? Frontiers in Psychology, 9, 998. https://doi.org/10.3389/fpsyg.2018.00998

References

Adair, J., Dushenko, T., & Lindsay, R. (1985). Ethical regulations and their impact on research practice. The American Psychologist, 40, 59–72. https://doi.org/10.1037//0003-066X.40.1.59
American Psychological Association (2017). Ethical principles of psychologists and code of conduct (2002, amended effective June 1, 2010, and January 1, 2017). Available at: www.apa.org/ethics/code.
American Sociological Association (2018). Code of ethics. Available at: www.asanet.org/sites/default/files/asa_code_of_ethics-june2018a.pdf.
Aronson, E. (1966). Avoidance of inter-subject communication. Psychological Reports, 19, 238. https://doi.org/10.2466/pr0.1966.19.1.238
Aronson, E., Wilson, T. D., & Brewer, M. B. (1998). Experimentation in social psychology. In Gilbert, D. T., Fiske, S. T., & Lindzey, G. (eds.), The Handbook of Social Psychology (pp. 99–142). McGraw-Hill.
Bargh, J. A. & Chartrand, T. L. (2000). The mind in the middle: A practical guide to priming and automaticity research. In Reis, H. T. & Judd, C. M. (eds.), Handbook of Research Methods in Social and Personality Psychology (pp. 253–285). Cambridge University Press.
Blackhart, G. C., Brown, K. E., Clark, T., Pierce, D. L., & Shell, K. (2012). Assessing the adequacy of postexperimental inquiries in deception research and the factors that promote participant honesty. Behavior Research Methods, 44, 24–40. https://doi.org/10.3758/s13428-011-0132-6
Brody, J. L., Gluck, J., & Aragon, A. S. (2000). Participants’ understanding of the process of psychological research: Debriefing. Ethics and Behavior, 10, 13–25. https://doi.org/10.1207/S15327019EB1001_2
Chandler, J., Mueller, P., & Paolacci, G. (2014). Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46, 112–130. http://dx.doi.org/10.3758/s13428-013-0365-7
Chester, D. S. & Lasko, E. N. (2021). Construct validation of experimental manipulations in social psychology: Current practices and recommendations for the future. Perspectives on Psychological Science, 16, 377–395. https://doi.org/10.1177/1745691620950684
Clark, T. D. (2013). Using social influence to enhance post-experimental inquiry success (unpublished Master’s thesis). University of North Dakota, Grand Forks, ND.
Cook, T. D. & Perrin, B. F. (1971). The effects of suspiciousness of deception and the perceived legitimacy of deception on task performance in an attitude change experiment. Journal of Personality, 39, 204–224. https://doi.org/10.1111/j.1467-6494.1971.tb00037.x
Cronbach, L. & Meehl, P. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302. https://doi.org/10.1037/h0040957
Diener, E., Matthews, R., & Smith, R. E. (1972). Leakage of experimental information to potential future subjects by debriefed participants. Journal of Experimental Research in Personality, 6, 264–267.
Edlund, J. E., Sagarin, B. J., Skowronski, J. J., Johnson, S., & Kutter, J. (2009). Whatever happens in the laboratory stays in the laboratory: The prevalence and prevention of participant crosstalk. Personality and Social Psychology Bulletin, 35, 635–642. https://doi.org/10.1177/0146167208331255
Edlund, J. E., Nichols, A. L., Okdie, B. M., et al. (2014). The prevalence and prevention of crosstalk: A multi-institutional study. The Journal of Social Psychology, 154, 181–185. https://doi.org/10.1080/00224545.2013.872596
Edlund, J. E., Lange, K. M., Sevene, A. M., et al. (2017). Participant crosstalk: Issues when using the Mechanical Turk. Tutorials in Quantitative Methods for Psychology, 13, 174–182. http://doi.org/10.20982/tqmp.13.3.p174
Edlund, J. E., Cuccolo, K., Irgens, M. S., Wagge, J. R., & Zlokovich, M. S. (2022). Saving science through replication studies. Perspectives on Psychological Science, 17(1), 216–225. https://doi.org/10.1177/1745691620984385
Ejelöv, E. & Luke, T. (2020). “Rarely safe to assume”: Evaluating the use and interpretation of manipulation checks in experimental social psychology. Journal of Experimental Social Psychology, 87, 103937. https://doi.org/10.1016/j.jesp.2019.103937
Fayant, M.-P., Sigall, H., Lemonnier, A., Retsin, E., & Alexopoulos, T. (2017). On the limitations of manipulation checks: An obstacle toward cumulative science. International Review of Social Psychology, 30, 125–130. https://doi.org/10.5334/irsp.102
Forgas, J. P. & East, R. (2008). On being happy and gullible: Mood effects on skepticism and the detection of deception. Journal of Experimental Social Psychology, 44, 1362–1367. https://doi.org/10.1016/j.jesp.2008.04.010
Golding, S. L. & Lichtenstein, E. (1970). Confession of awareness and prior knowledge of deception as a function of interview set and approval motivation. Journal of Personality and Social Psychology, 14, 213–223. https://doi.org/10.1037/h0028853
Hauser, D. J. & Schwarz, N. (2016). Attentive Turkers: MTurk participants perform better on online attention checks than do subject pool participants. Behavior Research Methods, 48, 400–407. https://doi.org/10.3758/s13428-015-0578-z
Hauser, D., Ellsworth, P., & Gonzalez, R. (2018). Are manipulation checks necessary? Frontiers in Psychology, 9, 998. https://doi.org/10.3389/fpsyg.2018.00998
Hertwig, R. & Ortmann, A. (2008). Deception in experiments: Revisiting the arguments in its defense. Ethics and Behavior, 18, 59–92. https://doi.org/10.1080/10508420701712990
Holmes, D. S. (1976). Debriefing after psychological experiments: I. Effectiveness of postdeception dehoaxing. American Psychologist, 31, 858–867. https://doi.org/10.1037/0003-066X.31.12.858
Junk, T. R. & Lyons, L. (2021). Reproducibility and replication of experimental particle physics results. arXiv. https://arxiv.org/abs/2009.06864.
Kees, J., Berry, C., Burton, S., & Sheehan, K. (2017). An analysis of data quality: Professional panels, student subject pools, and Amazon’s Mechanical Turk. Journal of Advertising, 46, 141–155. https://doi.org/10.1080/00913367.2016.1269304
Keltner, D., Locke, K. D., & Audrain, P. C. (1993). The influence of attributions on the relevance of negative feelings to personal satisfaction. Personality and Social Psychology Bulletin, 19, 21–29. https://doi.org/10.1177/0146167293191003
Kühnen, U. (2010). Manipulation checks as manipulation: Another look at the ease-of-retrieval heuristic. Personality and Social Psychology Bulletin, 36, 47–58. https://doi.org/10.1177/0146167209346746
Lerman, C., Trock, B., Rimer, B. K., et al. (1991). Psychological side effects of breast cancer screening. Health Psychology, 10, 259–267. https://doi.org/10.1037/0278-6133.10.4.259
Levy, L. (1967). Awareness, learning, and the beneficent subject as expert witness. Journal of Personality and Social Psychology, 6, 363–370.
Lichtenstein, E. (1970). “Please don’t talk to anyone about this experiment”: Disclosure of deception by debriefed subjects. Psychological Reports, 26, 485–486.
McFarland, C., Cheam, A., & Buehler, R. (2007). The perseverance effect in the debriefing paradigm: Replication and extension. Journal of Experimental Social Psychology, 43, 233–240. https://doi.org/10.1016/j.jesp.2006.01.010
McMillen, D. & Austin, J. (1971). Effect of positive feedback on compliance following transgression. Psychonomic Science, 24, 59–61. https://doi.org/10.3758/BF03337892
Meade, A. W. & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437–455. https://doi.org/10.1037/a0028085
Miketta, S. & Friese, M. (2019). Debriefed but still troubled? About the (in)effectiveness of postexperimental debriefings after ego threat. Journal of Personality and Social Psychology, 117, 282–309. https://doi.org/10.1037/pspa0000155
Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social Psychology, 67, 371–378. https://doi.org/10.1037/h0040525
National Communication Association (2017). A code of professional ethics for the communication scholar/teacher. Available at: www.natcom.org/sites/default/files/pages/1999_Public_Statements_A_Code_of_Professional_Ethics_for_%20the_Communication_Scholar_Teacher_November.pdf.
Necka, E., Cacioppo, S., Norman, G., & Cacioppo, J. (2016). Measuring the prevalence of problematic respondent behaviors among MTurk, campus, and community participants. PloS One, 11(6), e0157732. https://doi.org/10.1371/journal.pone.0157732
Newberry, B. H. (1973). Truth telling in subjects with information about experiments: Who is being deceived? Journal of Personality and Social Psychology, 25, 369–374. https://doi.org/10.1037/h0034229
Nichols, A. & Edlund, J. (2020). Why don’t we care more about carelessness? Understanding the causes and consequences of careless participants. International Journal of Social Research Methodology, 23, 625–638. https://doi.org/10.1080/13645579.2020.1719618
Nichols, A. L. & Maner, J. (2008). The good-subject effect: Investigating participant demand characteristics. The Journal of General Psychology, 135, 151–165. https://doi.org/10.3200/GENP.135.2.151-166
Nuijten, M. B., Hartgerink, C. H., Van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48, 1205–1226. https://doi.org/10.3758/s13428-015-0664-2
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251). https://doi.org/10.1126/science.aac4716
Oppenheimer, D., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45, 867–872. https://doi.org/10.1016/j.jesp.2009.03.009
Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776–783. https://doi.org/10.1037/h0043424
Ortmann, A. & Hertwig, R. (2002). The costs of deception: Evidence from psychology. Experimental Economics, 5, 111–131. https://doi.org/10.1023/A:1020365204768
Rubin, M. (2017). When does HARKing hurt? Identifying when different types of undisclosed post hoc hypothesizing harm scientific progress. Review of General Psychology, 21, 308–320. https://doi.org/10.1037/gpr0000128
Sagarin, B. J., Rhoads, K. v. L., & Cialdini, R. B. (1998). Deceiver’s distrust: Denigration as a consequence of undiscovered deception. Personality and Social Psychology Bulletin, 24, 1167–1176. https://doi.org/10.1177/01461672982411004
Sharpe, D. & Faye, C. (2009). A second look at debriefing practices: Madness in our method? Ethics & Behavior, 19, 432–447. https://doi.org/10.1080/10508420903035455
Shimp, T. A., Hyatt, E. M., & Snyder, D. J. (1991). A critical appraisal of demand artifacts in consumer research. The Journal of Consumer Research, 18, 273–283. https://doi.org/10.1086/209259
Sigall, H. & Mills, J. (1998). Measures of independent variables and mediators are useful in social psychology experiments: But are they necessary? Personality and Social Psychology Review, 2, 218–226. https://doi.org/10.1207/s15327957pspr0203_5
Taylor, K. & Sheppard, J. (1996). Probing suspicion among participants in deception research. American Psychologist, 51, 886–887. https://doi.org/10.1037/0003-066X.51.8.886
Tesch, F. E. (1977). Debriefing research participants: Though this be method there is madness to it. Journal of Personality and Social Psychology, 35, 217–224. https://doi.org/10.1037/0022-3514.35.4.217
Walsh, W. B. & Stillman, S. M. (1974). Disclosure of deception by debriefed subjects. Journal of Counseling Psychology, 21, 315–319. https://doi.org/10.1037/h0036683
Wilson, T. D., Aronson, E., & Carlsmith, K. (2010). The art of laboratory experimentation. In Fiske, S. T., Gilbert, D. T., & Lindzey, G. (eds.), Handbook of Social Psychology, 4th ed. (vol. 1, pp. 51–81). Wiley.
Zadvinskis, I. M. & Melnyk, B. M. (2019). Making a case for replication studies and reproducibility to strengthen evidence‐based practice. Worldviews on Evidence-Based Nursing, 16(1), 2–3. https://doi.org/10.1111/wvn.12349
Zannella, L., Vahedi, Z., & Want, S. (2020). What do undergraduate students learn from participating in psychological research? Teaching of Psychology, 47, 121–129. https://doi.org/10.1177/0098628320901379