
What can we learn from 1,000 meta-analyses across 10 different disciplines?

Published online by Cambridge University Press: 02 October 2025

Weilun Wu
Affiliation:
Department of Economics and Finance and UCMeta, University of Canterbury, Christchurch, New Zealand
Jianhua Duan
Affiliation:
Statistics New Zealand, Christchurch, New Zealand
W. Robert Reed*
Affiliation:
Department of Economics and Finance and UCMeta, University of Canterbury, Christchurch, New Zealand
Elizabeth Tipton
Affiliation:
Statistics for Evidence-Based Policy and Practice Center, Northwestern University, Evanston, IL, USA
Corresponding author: W. Robert Reed; Email: bob.reed@canterbury.ac.nz

Abstract

This study analyzes 1,000 meta-analyses drawn from 10 disciplines—including medicine, psychology, education, biology, and economics—to document and compare methodological practices across fields. We find large differences in the size of meta-analyses, the number of effect sizes per study, and the types of effect sizes used. Disciplines also vary in their use of unpublished studies, the frequency and type of tests for publication bias, and whether they attempt to correct for it. Notably, many meta-analyses include multiple effect sizes from the same study, yet fail to account for statistical dependence in their analyses. We document the limited use of advanced methods—such as multilevel models and cluster-adjusted standard errors—that can accommodate dependent data structures. Correlations are frequently used as effect sizes in some disciplines, yet researchers often fail to address the methodological issues this introduces, including biased weighting and misleading tests for publication bias. We also find that meta-regression is underutilized, even when sample sizes are large enough to support it. This work serves as a resource for researchers conducting their first meta-analyses, as a benchmark for researchers designing simulation experiments, and as a reference for applied meta-analysts aiming to improve their methodological practices.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Society for Research Synthesis Methodology

Table 1 Number of studies and estimates in meta-analyses


Figure 1 Number of studies in a meta-analysis by discipline. Note: Figure 1 reports boxplots (without outliers) of the number of studies for each of the 10 disciplines. Superimposed on the boxplots is the average number of studies per meta-analysis. Disciplines are ordered from the smallest median number of studies per meta-analysis at the top to the largest at the bottom.


Figure 2 Number of estimates in a meta-analysis by discipline. Note: Figure 2 reports boxplots (without outliers) of the number of estimates for each of the 10 disciplines. Superimposed on the boxplots is the average number of estimates per meta-analysis. Disciplines are ordered from the smallest median number of estimates per meta-analysis at the top to the largest at the bottom.


Table 2 Number of estimates per study in meta-analyses


Figure 3 Number of estimates per study in a meta-analysis by discipline. Note: Figure 3 reports boxplots (without outliers) of the number of estimates per study for each of the 10 disciplines. Superimposed on the boxplots is the average number of estimates per study. Disciplines are ordered from the smallest median number of estimates per study at the top to the largest at the bottom.


Table 3 Percent of meta-analyses including unpublished primary studies


Figure 4 Usage rates of different effect sizes. Note: Figure 4 reports aggregate usage rates of different effect sizes across all disciplines. “Ratio” includes odds ratios, hazard ratios, and risk ratios. “Mean Diff” includes Cohen’s d and Hedges’ g. “Prevalence” uses a frequency or count to measure an outcome. “Corr” stands for correlation and includes partial correlation coefficients.


Table 4 Prevalence of different effect sizes


Figure 5 Usage rates of different meta-analytic estimators. Note: Figure 5 reports aggregate usage rates of different meta-analytic estimators across all disciplines. RE, random effects; FE, fixed effects; MVM, multi-level model; OLS, ordinary least squares; SEM, structural equation modelling; Bayes, Bayesian estimation.


Table 5 Prevalence of different estimators


Table 6 Closer comparison of random effects and fixed effects


Table 7 Reporting of effect heterogeneity and median I-squared


Figure 6 I-squared by discipline. Note: Figure 6 reports boxplots (without outliers) of I-squared, a relative measure of effect heterogeneity, for each of the 10 disciplines. Superimposed on the boxplots is the mean I-squared value for each discipline. Disciplines are ordered from the smallest median I-squared value at the top to the largest at the bottom.
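The I-squared statistic summarized in Figure 6 can be illustrated with a minimal sketch. All numbers below are hypothetical and are not taken from the paper; the computation follows the standard definition of I-squared from Cochran's Q.

```python
# Sketch: Cochran's Q and the I-squared heterogeneity statistic for a
# small set of effect estimates (illustrative values only).
import numpy as np

effects = np.array([0.2, 0.5, 0.35, 0.6, 0.1])      # hypothetical effect sizes
variances = np.array([0.02, 0.03, 0.025, 0.04, 0.02])  # their sampling variances

w = 1.0 / variances                            # inverse-variance weights
theta_fe = np.sum(w * effects) / np.sum(w)     # fixed-effect pooled estimate
Q = np.sum(w * (effects - theta_fe) ** 2)      # Cochran's Q
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100              # I-squared as a percentage
print(round(I2, 1))                            # prints 35.2
```

I-squared expresses the share of total variability in the estimates attributable to between-study heterogeneity rather than sampling error, which is why it is bounded between 0 and 100 percent.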


Table 8 Percent of meta-analyses testing and finding publication bias


Figure 7 Prevalence of different types of tests for publication bias. Note: Figure 7 reports aggregate usage rates of different types of publication bias tests. Funnel, funnel plot; Eggers, Egger-type regression; TrimFill, trim and fill; Beggs, Begg and Mazumdar’s rank correlation test; FailSafe, fail-safe N; Selection, selection-model test for publication bias; PUniCurv, either p-Uniform or p-Curve test for publication bias.
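The Egger-type regression listed above can be sketched in a few lines. In one common formulation, the standardized effect (effect divided by its standard error) is regressed on precision (the inverse standard error); an intercept far from zero signals funnel-plot asymmetry. The values below are hypothetical, constructed so the toy data are exactly linear, and are not taken from the paper.

```python
# Sketch of an Egger-type regression test for funnel-plot asymmetry
# (illustrative data only; a real test would also report the intercept's
# standard error and p-value).
import numpy as np

effects = np.array([0.10, 0.25, 0.40, 0.55, 0.70])  # hypothetical estimates
se = np.array([0.05, 0.10, 0.15, 0.20, 0.25])       # their standard errors

z = effects / se           # standardized effects
precision = 1.0 / se       # inverse standard errors
# OLS fit z = b0 + b1 * precision; Egger's test examines the intercept b0
b1, b0 = np.polyfit(precision, z, 1)
print(round(b0, 2), round(b1, 2))   # prints 3.0 -0.05
```

Here the intercept is well away from zero, the pattern an Egger-type test interprets as small-study asymmetry consistent with publication bias.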


Table 9 Assessing publication bias: Funnel plots and other approaches


Table 10 Prevalence of methods to assess publication bias


Figure 8 Usage rates of different statistical packages. Note: Figure 8 reports aggregate usage rates of different statistical software packages.


Table 11 Prevalence of different statistical packages


Table 12 Use of meta-regression in meta-analyses


Table 13 Correlations, Fisher’s z, and alternative weights
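The Fisher's z transformation referenced in Table 13 addresses a problem the abstract raises: the sampling variance of a raw correlation depends on the correlation itself, which biases inverse-variance weighting. Fisher's z has an approximately r-free variance of 1/(n - 3). The correlation and sample size below are hypothetical, used only to illustrate the transform.

```python
# Sketch: Fisher's z transformation for correlation effect sizes
# (illustrative values only).
import math

r, n = 0.45, 60           # hypothetical correlation and sample size
z = math.atanh(r)         # Fisher's z = 0.5 * ln((1 + r) / (1 - r))
var_z = 1.0 / (n - 3)     # approximate sampling variance of z
r_back = math.tanh(z)     # back-transform recovers the correlation
print(round(z, 3), round(var_z, 4), round(r_back, 2))   # prints 0.485 0.0175 0.45
```

A typical workflow pools effect sizes on the z scale with weights n - 3, then back-transforms the pooled estimate and its confidence limits to the correlation scale.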


Table 14 Correcting for publication bias


Table 15 Meta-analyses that account for dependencies


Table 16 Evaluation of disciplines against recommendations


Appendix A Percent of meta-analyses testing and finding publication bias: Comparison of experimental and observational meta-analyses


Appendix B Assessing the scope for meta-regression in studies that currently do not conduct a meta-regression or only use a single predictor


Appendix C Relationship between estimates/study and the percent of meta-analyses addressing dependency