## 1. Introduction

Meta-analysis is a form of research synthesis that allows researchers to quantitatively integrate the results from a set of studies on the same topic (Borenstein, Hedges, Higgins & Rothstein, 2009; Cooper, Hedges & Valentine, 2009). Since the outcomes from the individual studies are often expressed in different measurement units, their results are typically converted into a common metric through a standardized effect size index (such as the standardized mean difference). The main objectives in a meta-analysis are to obtain an overall effect size estimate, to assess the heterogeneity among the individual effect size estimates, and to search for moderators that can account for (at least) part of that heterogeneity (Hedges & Olkin, 1985; Sánchez-Meca & Marín-Martínez, 2010).

The results or effect sizes of the individual studies in a meta-analysis usually exhibit some heterogeneity (e.g., Sidik & Jonkman, 2005b; Thompson & Higgins, 2002). This means that, although a set of studies analysing the same phenomenon (e.g., effectiveness of psychological treatments and interventions on a given disorder) are selected, their results are likely to differ to some extent. For that reason, moderator analyses typically constitute a crucial element of a meta-analysis (Lipsey, 2009). In a moderator analysis, the goal is to test the influence of one or more study characteristics (e.g., type and duration of the intervention, severity of the disorder in the sampled patients) on the outcome variable (e.g., efficacy of the intervention, assessed through the comparison between a treatment and a control group). Such analyses can be conducted by fitting linear models to the data, where the moderators constitute the predictor variables and the effect sizes are employed as the criterion variable (Borenstein *et al*., 2009). This leads to so-called meta-regression models (Thompson & Higgins, 2002). In a meta-regression model, both continuous and categorical moderators can be included.
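As a concrete illustration of fitting such a linear model, the following minimal sketch estimates the regression coefficients of a meta-regression by weighted least squares, weighting each study by the inverse of its sampling variance (a fixed-effects meta-regression, for simplicity). All data values and variable names here are hypothetical, invented purely for illustration.

```python
import numpy as np

# Hypothetical data: observed effect sizes (d), their sampling variances (v),
# and one continuous moderator (x), e.g., treatment duration in weeks.
d = np.array([0.35, 0.52, 0.10, 0.61, 0.44, 0.28])
v = np.array([0.04, 0.06, 0.05, 0.03, 0.07, 0.05])
x = np.array([4.0, 8.0, 2.0, 10.0, 6.0, 3.0])

# Design matrix with an intercept column and the moderator as predictor.
X = np.column_stack([np.ones_like(x), x])

# Weighted least squares: each study is weighted by the inverse of its
# sampling variance, so more precise studies contribute more.
W = np.diag(1.0 / v)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)
print(beta)  # [intercept, slope of the moderator]
```

In a mixed-effects meta-regression the weights would instead be 1/(v + τ^{2}), incorporating the between-studies variance introduced below.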

When carrying out a meta-analysis, some statistical model must be assumed for the effect size distribution, and the model choice will have an influence on the validity and generalizability of the results from the meta-analysis. Two kinds of statistical models have been employed for the majority of meta-analytic reviews conducted so far, namely the fixed-effects and random-effects models (Hedges & Vevea, 1998; Schmidt, Oh & Hayes, 2009). Nowadays, most researchers agree that the model choice should be made based on the generalizability intended for the results (National Research Council, 1992). Only random-effects models, which include an additional variance component to model the between-studies heterogeneity, allow for generalization to studies different to the ones included in the meta-analysis, which is usually the goal when carrying out such a review. Thus, random-effects models are a suitable option for most meta-analyses (Hedges & Vevea, 1998; Raudenbush, 1994, 2009).

Under a random-effects model, it is assumed that the study outcomes (e.g., treatment efficacy) will fluctuate as a consequence of two sources of variation: the sampling of the participants for each study; and the differential characteristics of the studies (e.g., different conditions of the sample, treatment application, methodology, or context in each individual study). The magnitude of the latter can be analysed through the estimation of the heterogeneity (or between-studies) variance, τ^{2}, which represents the excess variation among the effects over that expected from sampling error alone (Thompson & Sharp, 1999). In contrast to the sampling variances from each effect size, which quantify the random sampling error, τ^{2} denotes systematic differences due to the influence of characteristics from the individual studies. The identification of some of these characteristics (or moderators) is the main objective of the moderator analyses. Since the moderators are usually included as fixed effects in the model, the addition of a random effect (the effect sizes in the studies) to model the heterogeneity among the studies leads to mixed-effects meta-regression models.
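The text does not commit to a particular estimator of τ^{2} at this point; as one concrete possibility, the widely used method-of-moments (DerSimonian-Laird) estimator for a mean-only random-effects model can be sketched as follows. The data in the usage example are hypothetical.

```python
import numpy as np

def tau2_dl(d, v):
    """Method-of-moments (DerSimonian-Laird) estimate of the
    between-studies variance tau^2 for a mean-only model.
    d: observed effect sizes; v: their sampling variances."""
    w = 1.0 / v
    d_bar = np.sum(w * d) / np.sum(w)          # weighted mean effect
    Q = np.sum(w * (d - d_bar) ** 2)           # Cochran's Q statistic
    k = len(d)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    # Excess of Q over its expectation under homogeneity (k - 1),
    # rescaled; truncated at zero because a variance cannot be negative.
    return max(0.0, (Q - (k - 1)) / c)

# Hypothetical example: six studies.
d = np.array([0.35, 0.52, 0.10, 0.61, 0.44, 0.28])
v = np.array([0.04, 0.06, 0.05, 0.03, 0.07, 0.05])
print(tau2_dl(d, v))
```

The truncation at zero mirrors the fact that τ^{2} represents the excess variation over that expected from sampling error alone.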

There are several parameters of interest in a meta-regression model. One of these is the model predictive power, denoted by Ρ^{2} (Ρ denotes the capital Greek letter ‘rho’), which can be defined as the proportion of variance among the effect sizes that can be accounted for by the predictors included in the model. Note that only the variance due to differences among the studies, quantified by τ^{2}, can be explained by the predictors usually included in a mixed-effects meta-regression model. An estimate of the Ρ^{2} parameter is usually denoted as an *R*^{2} value. The interpretation of *R*^{2} is identical in ordinary regression and in meta-regression models, in terms of a percentage or proportion of the variability in the outcomes associated with the predictor(s).

When regression models are fitted using ordinary least squares techniques, the *R*^{2} index is computed as the quotient between the sum of squares due to the regression and the total sum of squares, that is, *R*^{2} = *SS*_{regression}/*SS*_{total} (e.g., Pedhazur & Schmelkin, 1991). However, this strategy is not suitable for meta-regression models because part of the total variability, more specifically the sampling error of an observed effect size given the population effect size in that study, cannot by definition be explained by the moderators included in the model (Aloe, Becker & Pigott, 2010; Konstantopoulos & Hedges, 2009; Rodriguez & Maeda, 2006).^{1} Thus, a different method is typically proposed for obtaining an *R*^{2} index in meta-regression models (Raudenbush, 1994), where the total variability is an estimate of the between-studies variance, τ^{2}, and the variability explained by the predictors in the model is estimated as a part of τ^{2} (see equation (3)). This method will be presented, explained, and illustrated in this paper.
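Following the verbal description above (equation (3) of the paper is not reproduced here), the index can be sketched as the proportional reduction in the estimated between-studies variance when the moderators are added to the model. The truncation to the [0, 1] interval in this sketch is an assumption for illustration, since estimated τ^{2} reductions can otherwise fall outside that range.

```python
def r_squared_meta(tau2_total, tau2_residual):
    """R^2 as the proportional reduction in the between-studies
    variance when moderators are added to the model.
    tau2_total:    tau^2 estimated from a model without moderators
    tau2_residual: tau^2 estimated from the model with the moderators"""
    if tau2_total <= 0.0:
        return 0.0  # no between-studies variance to explain
    r2 = (tau2_total - tau2_residual) / tau2_total
    return min(1.0, max(0.0, r2))  # truncate to [0, 1]
```

For example, if the between-studies variance drops from 0.10 to 0.04 after including a moderator, the moderator accounts for about 60% of the heterogeneity.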

In a meta-regression model, an adequate estimate of the magnitude of its predictive power via the *R*^{2} index is an essential complement to the statistical significance of the model. The *R*^{2} index informs us about the practical significance, or the degree of influence of a set of moderators on the heterogeneity of the effect sizes in a meta-analysis (e.g., explaining around 20% or 30% of the heterogeneity). However, as far as we know, no studies have yet systematically evaluated the performance of the *R*^{2} index under the conditions of a meta-regression model. Therefore, the purpose of the present study was to assess the performance of the method proposed by Raudenbush (1994) to compute an *R*^{2} index in meta-analysis, by conducting a Monte Carlo simulation with different conditions usually found in real meta-analyses.

The outline of the present paper is as follows. First, mixed-effects meta-regression models are briefly sketched. Second, various alternatives for computing an *R*^{2} index according to the proposal of Raudenbush (1994) for meta-analysis are considered. After presenting the methods, results from previous simulation studies that pursued part of the objectives of our study are summarized. The performance of the alternative methods considered here is then illustrated by applying them to an example. Next, a simulation study comparing the various estimators is presented and the results obtained are detailed. Finally, the results are discussed and some conclusions are provided, assessing the degree of accuracy of the different methods for the computation of an *R*^{2} index, as a measure of the explanatory power of a predictor, as a function of the specific conditions in a meta-analysis (e.g., number of studies, sample size distribution of the studies, effect size distribution, and the true percentage of variance accounted for by the predictor).