
Direct replications in the era of open sampling

  • Gabriele Paolacci and Jesse Chandler

Abstract

Data collection in psychology increasingly relies on "open populations" of participants recruited online, which presents both opportunities and challenges for replication. Reduced costs and the ability to access the same populations allow for more informative replications. However, researchers should ensure the directness of their replications by addressing the threats of participant nonnaiveté and selection effects.
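
As a concrete illustration of one way to deal with nonnaiveté, the sketch below checks a replication's participant list against the worker IDs saved from the original study, so that nonnaive participants can be identified and excluded. This is a minimal sketch, not the authors' procedure: the file names and the "WorkerId" column are assumptions modeled on typical Amazon Mechanical Turk batch exports, and in practice the same exclusion is often enforced up front (for example, by restricting recruitment to workers who did not take the original study).

    import csv

    def load_worker_ids(path, column="WorkerId"):
        """Read one column of worker IDs from a CSV export.
        The column name is an assumption based on MTurk batch files."""
        with open(path, newline="") as f:
            return {row[column] for row in csv.DictReader(f)}

    # Hypothetical file names for the two batch-results files.
    original = load_worker_ids("original_study_batch.csv")
    replication = load_worker_ids("replication_batch.csv")

    # Workers who completed the original study are nonnaive
    # with respect to a direct replication of its materials.
    nonnaive = replication & original
    print(f"{len(nonnaive)} of {len(replication)} replication participants "
          f"also took the original study; consider excluding them.")

Excluding these participants before analysis (or, better, before recruitment) keeps the replication sample comparable in naiveté to the original one, which is part of what makes a replication "direct."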

