
Meta-regression with categorical moderators and dependent effect sizes: A simulation study

Published online by Cambridge University Press:  14 May 2026

Belén Fernández-Castilla*
Affiliation:
Department of Methodology of Health and Behavioral Sciences, Faculty of Psychology, Universidad Nacional de Educación a Distancia, Spain
José Antonio López-López
Affiliation:
Basic Psychology & Methodology, University of Murcia, Spain; IMIB-Arrixaca, Murcian Institute of Biomedical Research, Spain
María Rubio-Aparicio
Affiliation:
Basic Psychology & Methodology, University of Murcia, Spain
Corresponding author: Belén Fernández-Castilla; Email: bfcastilla@gmail.com

Abstract

Categorical moderators are often found in meta-analysis and examined using meta-regression models. When multiple effect sizes are present within studies, several methods can be used for meta-regression: multivariate models, three-level models, correlated-effects models with robust variance estimation (RVE), three-level models with RVE, and correlated-effects models with RVE and cluster wild bootstrapping (CWB). This study aimed to compare the performance of these methods through a simulation study. Cohen’s d values were generated under a multivariate model, incorporating a binary variable that could represent either study-level or effect size-level characteristics. When the moderator referred to an effect size-level characteristic, its effect was allowed to vary across studies. Factors manipulated in the simulation included number of studies, number of outcomes per study, and the distribution of effect sizes across the categories of the moderator variable, ranging from balanced to highly unbalanced. The methods were applied and compared in terms of bias, Type I error, and power. The results showed that all methods exhibited lower power to detect effects when the moderator variable referred to study-level characteristics and the effect size distribution was very unbalanced. Methods based on RVE (correlated-effects with RVE or with RVE and CWB, and three-level models with RVE) effectively controlled Type I error rates but tended to be overconservative. In contrast, three-level models achieved higher power but at the cost of inflated Type I error. The best balance between Type I error control and power was observed when using a combination of three-level models and RVE.
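The data-generating mechanism described above (true effects varying between and within studies, with a binary moderator) can be illustrated with a minimal Python sketch. This is a hypothetical illustration of that kind of three-level generating model, not the authors' actual simulation code; all parameter names, default values, and the fixed per-group sample size are assumptions for the example.

```python
import random
import statistics

def simulate_meta_data(n_studies=40, es_per_study=5, beta0=0.2, beta1=0.3,
                       tau2=0.10, omega2=0.05, prop_cat1=0.5, seed=1):
    """Generate dependent Cohen's d values under a three-level-style model.

    Hypothetical parameters (sketch only):
      beta0, beta1 -- intercept and effect of the binary moderator
      tau2, omega2 -- between-study and within-study true-effect variances
      prop_cat1    -- proportion of studies in moderator category 1
    """
    rng = random.Random(seed)
    rows = []
    for j in range(n_studies):
        # Study-level binary moderator (category 0 or 1)
        x = 1 if rng.random() < prop_cat1 else 0
        # Study-average true effect, shifted by the moderator
        delta_j = rng.gauss(beta0 + beta1 * x, tau2 ** 0.5)
        n_per_group = 30  # assumed per-group sample size, equal across studies
        for i in range(es_per_study):
            # Outcome-specific true effect within the study
            delta_ij = rng.gauss(delta_j, omega2 ** 0.5)
            # Approximate large-sample variance of Cohen's d
            var_d = 2 / n_per_group + delta_ij ** 2 / (4 * n_per_group)
            # Observed effect size with sampling error
            d = rng.gauss(delta_ij, var_d ** 0.5)
            rows.append({"study": j, "x": x, "d": d, "v": var_d})
    return rows

data = simulate_meta_data()
mean_d = statistics.fmean(r["d"] for r in data)
```

A dataset like this would then be analyzed with each of the compared methods (multivariate models, three-level models, RVE variants, CWB) to estimate bias, Type I error, and power for the moderator coefficient.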

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of The Society for Research Synthesis Methodology
Table 1 Results from meta-regressions carried out with different methods

Table 2 Mean Type I error rate disaggregated by method and simulation conditions

Figure 1 Type I error distribution across methods, disaggregated by the degree of imbalance in the distribution of effect sizes across the moderator categories.

Table 3 Power disaggregated by the simulated conditions and method

Figure 2 Statistical power across methods, disaggregated by the magnitude of differences between categories and the degree of imbalance in effect size distributions across moderator categories.

Table 4 Average Type I error rates disaggregated by method and simulated conditions

Table 5 Mean power segmented by method and simulation condition

Supplementary material

Fernández-Castilla et al. supplementary material (File, 3.3 MB)