
A Bayesian decision-making framework for replication

Published online by Cambridge University Press:  27 July 2018

Tom E. Hardwicke
Affiliation:
Meta-Research Innovation Center at Stanford (METRICS), Stanford School of Medicine, Stanford University, Stanford, CA 94305. tom.hardwicke@stanford.edu; https://tomhardwicke.netlify.com/
Michael Henry Tessler
Affiliation:
Department of Psychology, Stanford University, Stanford, CA 94305. mhtessler@stanford.edu; http://stanford.edu/~mtessler/
Benjamin N. Peloquin
Affiliation:
Department of Psychology, Stanford University, Stanford, CA 94305. bpeloqui@stanford.edu; https://benpeloquin7.github.io/
Michael C. Frank
Affiliation:
Department of Psychology, Stanford University, Stanford, CA 94305. mcfrank@stanford.edu; https://web.stanford.edu/~mcfrank/

Abstract

Replication is the cornerstone of science, but when should a study be replicated, and why? Not every study needs replication, especially when resources are limited. We propose that a decision-making framework grounded in Bayesian philosophy of science provides a principled basis for choosing which studies to replicate.
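One way a framework of this kind can be made concrete is to score candidate replications by expected information gain: replicate the study whose new data are expected to shift our beliefs the most. The sketch below is illustrative only, assuming a simple beta-binomial model with a hypothetical prior and sample size; it is not the target article's formalism.

import numpy as np
from scipy.special import betaln, digamma
from scipy.stats import betabinom

def kl_beta(a1, b1, a2, b2):
    # KL divergence KL(Beta(a1, b1) || Beta(a2, b2)), in nats.
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1)
            + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

def expected_information_gain(a, b, n):
    # Average KL between the post-replication posterior Beta(a+k, b+n-k)
    # and the current posterior Beta(a, b), weighted by the posterior
    # predictive probability of observing k successes in n new trials.
    ks = np.arange(n + 1)
    predictive = betabinom(n, a, b).pmf(ks)
    gains = [kl_beta(a + k, b + n - k, a, b) for k in ks]
    return float(np.dot(predictive, gains))

# Hypothetical original study: 14 successes in 20 trials on a flat Beta(1, 1) prior.
a, b = 1 + 14, 1 + 6
print(expected_information_gain(a, b, n=20))  # value of a 20-trial replication

Under this (assumed) operationalization, studies whose posteriors remain diffuse promise high expected gain from replication, while well-established effects promise little, matching the intuition that not every study needs replicating.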

Type
Open Peer Commentary
Copyright
© Cambridge University Press 2018

