
11 - Experimenter Effects

from Part II - The Building Blocks of a Study

Published online by Cambridge University Press: 25 May 2023

Austin Lee Nichols, Central European University, Vienna
John Edlund, Rochester Institute of Technology, New York

Summary

As social and behavioral scientists, we must understand the factors that drive the behaviors we measure, and careful design is required to minimize the influence of extraneous factors. Yet we often overlook one major class of such factors: those related to us, the experimenters. Experimenter effects can arise at every step of the research process, from the selection of hypotheses, to interactions with research participants that may alter their behavior, to biases in data interpretation. Although these effects often occur without notice and without ill intent, they nonetheless threaten the replicability and generalizability of research. In this chapter, we discuss when and how such effects arise, preventative measures that can reduce their influence, and methods for accounting for these effects when appropriate.
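
The summary's closing point, accounting for experimenter effects when they cannot be fully prevented, is often handled in practice by treating the experimenter as a grouping factor in the analysis. The sketch below is a minimal illustration of that general idea rather than a method taken from the chapter; the dataset, column names, and the use of Python's statsmodels are assumptions made for the example.

# Hypothetical sketch: model the outcome with "experimenter" as a random
# grouping factor, so each experimenter's sessions share a systematic offset.
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data (assumed, not from the chapter): one row per participant,
# with the outcome score, assigned condition, and the experimenter who ran
# the session.
data = pd.DataFrame({
    "score":        [4.1, 5.2, 3.8, 6.0, 4.9, 5.5, 3.7, 6.3],
    "condition":    ["control", "treatment"] * 4,
    "experimenter": ["A", "A", "B", "B", "C", "C", "D", "D"],
})

# Random intercept per experimenter: the condition effect is estimated while
# allowing for systematic between-experimenter differences.
model = smf.mixedlm("score ~ condition", data, groups=data["experimenter"])
result = model.fit()
print(result.summary())

With real data one would have many participants per experimenter; with so few observations the variance estimate is unstable, which is why this is only an illustrative sketch of the approach.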

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2023


