
Is there evidence of publication biases in JDM research?

Published online by Cambridge University Press:  01 January 2023

Frank Renkewitz*
Affiliation:
Department of Psychology, University of Erfurt, Nordhäuser Strasse 63, D-99089, Erfurt, Phone: +49-(0)361/ 737 2223
Heather M. Fuchs
Affiliation:
Department of Psychology, University of Erfurt, Nordhäuser Strasse 63, D-99089, Erfurt, Phone: +49-(0)361/ 737 2223
Susann Fiedler
Affiliation:
Max Planck Institute for Research on Collective Goods

Abstract

It is a long-known problem that the preferential publication of statistically significant results (publication bias) may lead to incorrect estimates of the true effects being investigated. Even though other research areas (e.g., medicine, biology) are aware of the problem and have identified strong publication biases, researchers in judgment and decision making (JDM) largely ignore it. We reanalyzed two current meta-analyses in this area. Both showed evidence of publication biases that may have led to a substantial overestimation of the true effects they investigated. A review of additional JDM meta-analyses shows that most conducted no or insufficient analyses of publication bias. However, given our results and the rarity of non-significant effects in the literature, we suspect that biases occur quite often. These findings suggest that (a) conclusions based on meta-analyses without reported tests of publication bias should be interpreted with caution and (b) publication policies and standard research practices should be revised to overcome the problem.
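One of the bias indicators used in the article, Egger's regression, tests for funnel-plot asymmetry by regressing each study's standardized effect on its precision; a non-zero intercept signals small-study effects consistent with publication bias. The following is a minimal generic sketch of that test, not the authors' actual analysis code; the function name and return values are illustrative.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE).
    The intercept estimates the asymmetry (bias); under this model the
    slope serves as a bias-adjusted estimate of the combined effect.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    precision = 1.0 / ses          # large studies -> high precision
    standardized = effects / ses   # effect in standard-error units
    res = stats.linregress(precision, standardized)
    return res.intercept, res.slope
```

In a data set with no small-study effects (e.g., identical true effects across studies of varying size), the intercept should be close to zero; a markedly non-zero intercept, as reported for the reanalyzed meta-analyses, suggests asymmetry.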

Information

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2011] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figure 1: Funnel plot with a mean effect size of r=.34 and no bias. Contours mark the conventional 5% and 10% levels of significance.


Figure 2: Funnel plot of the effect sizes in the primary studies summarized in the meta-analysis by Balliet and colleagues (2009). The correlational effect sizes were Fisher Z-transformed. The solid lines indicate the combined effect sizes of the published studies and the complete data set. The dashed lines provide the adjusted estimates for these samples of studies, which are based on Egger’s regression.
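The caption above notes that the correlational effect sizes were Fisher Z-transformed before pooling, a standard step that makes the sampling distribution of r approximately normal with variance depending only on sample size. A minimal sketch of the transform and its inverse (function names are illustrative):

```python
import numpy as np

def fisher_z(r):
    """Fisher Z-transform of a correlation: z = arctanh(r) = 0.5 * ln((1 + r) / (1 - r))."""
    return np.arctanh(r)

def inverse_fisher_z(z):
    """Back-transform a Fisher Z value to the correlation scale."""
    return np.tanh(z)
```

Meta-analytic averaging is done on the z scale (where the standard error is approximately 1 / sqrt(n - 3)), and the pooled value is back-transformed to r for reporting.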


Table 1: Three indicators of publication bias in the meta-analyses of Balliet et al. (2009) and Dato-on & Dahlstrom (2003)


Figure 3: Funnel plot of the effect sizes in the published studies summarized by Balliet and colleagues (2009). The white dots are studies imputed by the trim-and-fill procedure (estimator R0). The dashed line indicates the adjusted estimate of the combined effect size resulting from the trim-and-fill procedure.


Figure 4: Funnel plots of the effect sizes in the primary studies according to game form (iterated vs. one-shot).


Figure 5: Funnel plot of the effect sizes in the primary studies summarized in the meta-analysis by Dato-on and Dahlstrom (2003). The adjusted estimate of the combined effect size is based on Egger’s regression.