
Chapter 19 - Transparent Science

A More Credible, Reproducible, and Publishable Way to Do Science

from Part IV - Systemic Issues

Published online by Cambridge University Press:  19 November 2018

Robert J. Sternberg, Cornell University, New York
Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2018


References

Alsheikh-Ali, A. A., Qureshi, W., Al-Mallah, M. H., & Ioannidis, J. P. A. (2011). Public availability of published research data in high-impact journals. PLoS ONE, 6(9), e24357. doi:10.1371/journal.pone.0024357
Anderson, M., & Magruder, J. (2017). Split-sample strategies for avoiding false discoveries (Working Paper No. w23544). Cambridge, MA: National Bureau of Economic Research. doi:10.3386/w23544
Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547–1562. doi:10.1177/0956797617723724
Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483(7391), 531–533. doi:10.1038/483531a
Bem, D. J. (2004). Writing the empirical journal article. In Zanna, M. P., & Darley, J. M. (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd edn., pp. 185–219). Washington, DC: American Psychological Association.
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. doi:10.1038/nrn3475
Chang, A. C., & Li, P. (2015). Is economics research replicable? Sixty published papers from thirteen journals say “usually not.” Finance and Economics Discussion Series, 2015(83), 1–26. Washington, DC: Board of Governors of the Federal Reserve System. doi:10.17016/FEDS.2015.083
Claerbout, J. (1994). Seventeen years of super computing and other problems in seismology. Paper presented at the National Research Council meeting on High Performance Computing in Seismology. http://sepwww.stanford.edu/sep/jon/nrc.html
Crampton, E. W. (1947). The growth of the odontoblast of the incisor teeth as a criterion of vitamin C intake of the guinea pig. Journal of Nutrition, 33(5), 491–504. http://jn.nutrition.org/content/33/5/491.full.pdf
de Groot, A. (1969). Methodology: Foundations of inference and research in the behavioral sciences. The Hague: Mouton.
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. doi:10.1126/science.1255484
Hoenig, J. M., & Heisey, D. M. (2001). The abuse of power: The pervasive fallacy of power calculations for data analysis. American Statistician, 55(1), 19–24. doi:10.1198/000313001300339897
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. doi:10.1207/s15327957pspr0203_4
Lakens, D. (2014a). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44(7), 701–710. doi:10.1002/ejsp.2023
Lakens, D. (2014b, December 19). The 20% statistician: Observed power, and what to do if your editor asks for post-hoc power analyses. Retrieved November 11, 2017, from https://web.archive.org/web/20170711191030/http://daniellakens.blogspot.com/2014/12/observed-power-and-what-to-do-if-your.html
Lakens, D. (2017). Equivalence tests: A practical primer for t tests, correlations, and meta-analyses. Social Psychological and Personality Science, 8(4), 355–362. doi:10.1177/1948550617697177
Lin, W., & Green, D. P. (2016). Standard operating procedures: A safety net for pre-analysis plans. PS: Political Science & Politics, 49(3), 495–500. doi:10.1017/S1049096516000810
Lindsay, S., Simons, D., & Lilienfeld, S. (2016). Research preregistration 101. Observer, 29(10), 14–16.
Lithgow, G. J., Driscoll, M., & Phillips, P. (2017). A long journey to reproducible results. Nature, 548(7668), 387–388. doi:10.1038/548387a
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. doi:10.1126/science.aab2374
Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi:10.1126/science.aac4716
Pashler, H., & Wagenmakers, E.-J. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528–530. doi:10.1177/1745691612465253
Pereira, T. V., & Ioannidis, J. P. A. (2011). Statistically significant meta-analyses of clinical trials have modest credibility and inflated effects. Journal of Clinical Epidemiology, 64(10), 1060–1069. doi:10.1016/j.jclinepi.2010.12.012
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. doi:10.1037/0033-2909.86.3.638
Savage, C. J., & Vickers, A. J. (2009). Empirical study of data sharing by authors publishing in PLoS journals. PLoS ONE, 4(9), e7078. doi:10.1371/journal.pone.0007078
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. doi:10.1177/0956797611417632
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on generality (COG): A proposed addition to all empirical papers. Perspectives on Psychological Science. Advance online publication. doi:10.1177/1745691617708630
Vazire, S. (2017). Quality uncertainty erodes trust in science. Collabra: Psychology, 3(1), 1. doi:10.1525/collabra.74
Vazire, S. (2018). Implications of the credibility revolution for productivity, creativity, and progress. Perspectives on Psychological Science, 13(4), 411–417.
van ’t Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology – A discussion and suggested template. Journal of Experimental Social Psychology, 67, 2–12. doi:10.1016/j.jesp.2016.03.004
Vines, T. H., Albert, A. Y. K., Andrew, R. L., Débarre, F., Bock, D. G., Franklin, M. T., … Rennison, D. J. (2014). The availability of research data declines rapidly with article age. Current Biology, 24(1), 94–97. doi:10.1016/j.cub.2013.11.014
Wagenmakers, E.-J., Dutilh, G., & Sarafoglou, A. (2018). The creativity-verification cycle in psychological science: New methods to combat old idols. PsyArXiv. doi:10.17605/OSF.IO/37NTP
Wagenmakers, E.-J., Love, J., Marsman, M., Jamil, T., Ly, A., Verhagen, J., Selker, R., et al. (2017). Bayesian inference for psychology. Part II: Example applications with JASP. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-017-1323-7
Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726–728. doi:10.1037/0003-066X.61.7.726
