As two old(er) researchers who were involved early in the current science reform movement (pro-reform, to the chagrin of many of our peers), we believe that the target article barely addresses an essential point about the “replication crisis”: In a very short time, the resulting reform movement, including all of the fuss, anger, and passion that it generated, has been enormously useful in raising standards and improving methods in psychological science. Rather than believing that the field is still in crisis, some highly influential members of our community recently announced that psychology is now experiencing a “renaissance” (Nelson et al., 2018). One of us calls what has happened a civil war–like revolution (Spellman, 2015), suggesting an insurrection in which one group overthrows the structures put in place by another group. But here we use the term “reformation,” suggesting that the profession has become enlightened and changed itself for the better.
The reform movement has prompted changes across the entire research and publication process. As a result, experimental results are more reliable because researchers are increasing sample sizes. Researchers are posting methods, data, and analysis plans (sometimes encouraged by journals), thus promoting more accurate replications and vetting of data integrity. Researchers are pre-registering hypotheses and journals are pre-accepting registered reports, making conclusions more credible. Also, the experimental record is more complete because of preprint services, open access journals, and the increasing publication of replication studies. Therefore, we believe that the reformation's success results from actions by individuals, journals, and societies, combined with various environmental factors (e.g., technology, demographics, the cross-disciplinary recognition of the problem [Spellman, 2015]) that allowed the changes to take hold now, whereas reform movements with similar goals in the past had failed.
Amazingly, all of this has transpired in seven plus or minus two years. The early revelations that an assortment of high-profile studies failed to replicate, and then the later various mass replications – both those in which many different labs worked on many different studies (e.g., Nosek et al., 2015) and those in which many different labs worked on the same studies (e.g., Simons et al., 2014) – provided existence proofs that non-replicable published studies were widespread in psychology. The ground-breaking gem of a paper by Simmons et al. (2011) gave our field a way to understand how this could have happened by scientists simply following the norms as they understood them, without any evil intent. But the norms were defective.
We believe that the quality of psychological science has been improving so fast and so broadly – mainly because of the replication crisis – that replications are likely to become rarer rather than routine. The massive multi-lab multi-experiment replication projects have served their purpose and will die out. What should happen, and indeed become mainstream, is that extensions of original research should routinely include replication. The design of experiments and their execution are separable: Friendly laboratories should routinely exchange replication services in a shared effort to improve the transparency of their methods. Most replications should be friendly, and adversarial replications should be collegial and regulated. How might this be done?
In one approach (Kahneman, 2014), after developing but before running a study, replicators send the original authors a complete plan of the procedure. Original authors have a set time to respond with comments and suggested modifications. Replicators choose whether and how to change the protocol but must explain how and why. These exchanges should be available for reviewers and readers to use when evaluating the claims of each side (e.g., whether it was a “faithful” replication).
In a second approach, the negotiation is refereed. For example, journals that take pre-registered replications may require careful vetting of the replicator's protocol before giving it a go-ahead stamp of “true direct replication.” But journal intercession is not necessary; authors and replicators could agree to mediation by knowledgeable individuals or teams of appointed researchers.
The two proposals above, however, are limited to checking the replicability of individual studies – individual “bricks in the wall” – in the same way current reforms directly affect only the integrity of individual studies (Spellman, 2015). Science involves groups – groups of studies that connect together to define and develop (or destroy) theories (i.e., to create buildings or bridges from individual bricks) and communities of scientists who can work together, or in constructive competition (note: not opposition), to hone their shared ideas. Below we suggest two ways in which communities of scientists can engage in replication and theory development.
A third approach to replication is the daisy-chain approach. A group of laboratories sharing a theoretical orientation that others question could get together, with each lab offering its favorite experiment for exact replication by the next lab in the chain – with all results published together, win or lose. Even if not all of the replications are successful, such an exercise would improve the quality of communications about research methods within a field, and improve the credibility of the field as a whole.
A fourth, more ambitious form of replication, called “paradigmatic replication,” has been implemented by Kathleen Vohs (2018). Vohs recognizes that massive replication attempts of one study, particularly in an area where different researchers use different methods that change over time, are not a useful indicator of an evolving theory's robustness. In this procedure, the major proponents of a theory jointly resolve what the core elements of the theory are, and then decide what the best methods had been (or would be) to demonstrate/test its workings. A few diverse methods (e.g., varying independent or dependent measures) are devised, the protocols are pre-registered, and then multiple labs, both “believers” and “non-believers,” run the studies. Data are analyzed by a neutral third party.
Overall, we believe that the replication reform movement has already succeeded in valuable ways. Improvements of research methods are raising the credibility of results and reducing the need for replications by skeptics. We also believe that routine exchanges of “replication services” between cooperating laboratories (e.g., through StudySwap [https://osf.io/view/StudySwap/]) will further enhance the community's confidence in the clarity and completeness of methods, as well as in the stability of findings.