Open Conference Systems, ICQQMEAS2013

PUBLICATION BIAS IN META-ANALYSIS: CONFIDENCE INTERVALS FOR ROSENTHAL’S ‘FILE-DRAWER’ NR
Constantinos C. Frangos, Michail Tsagris, Christos C. Frangos

Last modified: 2015-09-24

Abstract


Meta-analysis refers to methods for combining results from different studies in order to identify patterns among study results, sources of disagreement among those results, and other relationships that emerge in the context of multiple studies. Publication bias refers to the fact that statistically significant results are more likely to be submitted and published than work with null or non-significant results. Combining only published studies therefore increases the possibility that the meta-analytic output is over-optimistic, i.e. distorted by publication bias. Existing techniques for the detection of publication bias include the funnel plot, Begg's rank correlation test, Egger's linear regression test, the trim-and-fill method, selection models, and Rosenthal's file-drawer N_R. The aim of the present study is to estimate confidence intervals for Rosenthal's N_R. The reasons are that this method is the second most frequently used after funnel plots and that it has not yet been explored in the literature. Rosenthal's N_R answers the question: 'How many new studies averaging a null result are required to bring the overall treatment effect to non-significance?' Its rationale can be summed up as follows: when this number is very high, the possibility of publication bias is low; when it is low, it indicates the presence of publication bias. The formula is

N_R = (sum_{i=1}^{n} Z_i)^2 / Z_alpha^2 - n,

where the Z_i are the standard normal z-scores corresponding to the p-values observed for each study, Z_alpha is the alpha percentage point of the standard normal distribution, and n is the number of studies. The existing rule of thumb is that when N_R > 5n + 10 there is a small likelihood of publication bias (Rosenthal, 1979). Rosenthal's N_R has the following shortcomings, which motivate the need for confidence intervals: 1. increased variability: if we remove one study (the first, for example), N_R falls significantly; 2. there is no hypothesis test for N_R; 3. the existing rule of thumb is considered highly conservative. The present paper suggests four methods to compute confidence intervals: a naive method by substitution, a normal approximation method, the bootstrap, and the jackknife. Simulation experiments, under different numbers of studies and various levels of Rosenthal's N_R, are presented. The estimates computed are probability coverage, bias, and width.
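As a rough illustration of the quantities described above, the following Python sketch (not part of the paper; function names, the one-tailed convention for the z-scores, and the bootstrap settings are assumptions for illustration) computes Rosenthal's N_R from a set of study p-values and a simple percentile bootstrap confidence interval for it.

```python
import numpy as np
from scipy import stats

def rosenthal_nr(p_values, alpha=0.05):
    """Rosenthal's file-drawer number: N_R = (sum Z_i)^2 / Z_alpha^2 - n,
    where Z_i are one-tailed z-scores of the observed p-values."""
    p = np.asarray(p_values, dtype=float)
    z = stats.norm.isf(p)            # z-scores corresponding to the p-values
    z_alpha = stats.norm.isf(alpha)  # alpha percentage point of N(0, 1)
    n = p.size
    return (z.sum() ** 2) / (z_alpha ** 2) - n

def bootstrap_ci_nr(p_values, alpha=0.05, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap CI for N_R, resampling studies with replacement."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p_values, dtype=float)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(p, size=p.size, replace=True)
        boot[b] = rosenthal_nr(sample, alpha)
    lo, hi = np.percentile(boot, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Example: five studies with the listed (one-tailed) p-values.
p_vals = [0.01, 0.03, 0.20, 0.04, 0.002]
nr = rosenthal_nr(p_vals)
print(f"N_R = {nr:.1f}, rule-of-thumb threshold 5n + 10 = {5 * len(p_vals) + 10}")
print("95% bootstrap CI:", bootstrap_ci_nr(p_vals))
```

The comparison against 5n + 10 mirrors the rule of thumb attributed to Rosenthal (1979); the percentile bootstrap shown here is only one of the four interval methods mentioned in the abstract, and the paper's own simulation design is not reproduced.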

Full Text: PDF