
Bayesian workflow for bias-adjustment model in meta-analysis

Published online by Cambridge University Press:  13 November 2025

Juyoung Jung
Affiliation: Educational Measurement and Statistics, The University of Iowa, United States

Ariel M. Aloe*
Affiliation: Educational Measurement and Statistics, The University of Iowa, United States

*Corresponding author: Ariel M. Aloe; Email: ariel-aloe@uiowa.edu

Abstract

Bayesian hierarchical models offer a principled framework for adjusting for study-level bias in meta-analysis, but their complexity and sensitivity to prior specifications necessitate a systematic framework for robust application. This study demonstrates the application of a Bayesian workflow to this challenge, comparing a standard random-effects model to a bias-adjustment model across a real-world dataset and a targeted simulation study. The workflow revealed a high sensitivity of results to the prior on bias probability, showing that while the simpler random-effects model had superior predictive accuracy as measured by the widely applicable information criterion, the bias-adjustment model successfully propagated uncertainty by producing wider, more conservative credible intervals. The simulation confirmed the model’s ability to recover true parameters when priors were well-specified. These results establish the Bayesian workflow as a principled framework for diagnosing model sensitivities and ensuring the transparent application of complex bias-adjustment models in evidence synthesis.
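The model comparison described in the abstract relies on the widely applicable information criterion (WAIC), which scores a fitted model from its posterior draws of pointwise log-likelihoods. As a minimal sketch of how that criterion is computed (the draw counts and simulated log-likelihoods below are hypothetical illustrations, not values from the paper):

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (S x N) matrix of pointwise log-likelihoods:
    S posterior draws by N observations (here, N studies in a meta-analysis)."""
    # log pointwise predictive density: average the likelihoods over draws
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    # effective number of parameters: per-observation variance of log-likelihoods
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    # deviance scale, so lower values indicate better expected predictive accuracy
    return -2.0 * (lppd - p_waic)

# Hypothetical example: 4000 posterior draws for 16 studies
rng = np.random.default_rng(1)
log_lik = rng.normal(loc=-1.2, scale=0.1, size=(4000, 16))
print(waic(log_lik))
```

On this scale, the abstract's finding that the random-effects model had "superior predictive accuracy" corresponds to it obtaining the lower WAIC of the two models.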

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Society for Research Synthesis Methodology
Figure 1 Prior predictive distributions of simulated effect sizes for random-effects and bias-adjustment models.

Table 1 Posterior summaries for random-effects and bias-adjustment models.

Figure 2 Posterior predictive density overlay and test statistics for the random-effects model.

Table 2 Model comparison using the WAIC criterion between random-effects and bias-adjustment models.

Figure 3 Overall effect size forest plot for random-effects and bias-adjustment models.

Table 3 Posterior summaries for the bias-adjustment model across sensitivity analyses in simulated data.

Figure 4 Overall effect size forest plot for bias-adjustment models with simulated data.

Figure A1 Posterior predictive density overlay and test statistics for the bias-adjustment model ($K=16$).

Figure A2 Posterior predictive density overlay and test statistics for the bias-adjustment model ($K=12$).

Figure A3 Posterior predictive density overlay and test statistics for the bias-adjustment model ($K=9$).

Figure A4 Posterior predictive density overlay and test statistics for the bias-adjustment model ($K=5$).

Figure A5 Study-specific effect size forest plot for random-effects and bias-adjustment models.

Supplementary material: Jung and Aloe supplementary material (File, 377.9 KB)