
Testing for publication bias in meta-analysis under Copas selection model

Posted by: Rui Duan
Publication date: 2020
Research field: Mathematical Statistics
Paper language: English





In meta-analyses, publication bias is a well-known, important, and challenging issue because the validity of the results from a meta-analysis is threatened if the sample of studies retrieved for review is biased. One popular method for dealing with publication bias is the Copas selection model, which provides a flexible sensitivity analysis for correcting the estimates while offering considerable insight into the data-suppression mechanism. However, rigorous testing procedures for detecting bias under the Copas selection model are lacking. To fill this gap, we develop a score-based test for detecting publication bias under the Copas selection model. We show that the behavior of the standard score test statistic is irregular because the parameters of the Copas selection model disappear under the null hypothesis, leading to an identifiability problem. We propose a novel test statistic and derive its limiting distribution. A bootstrap procedure is provided to obtain the p-value of the test for practical applications. We conduct extensive Monte Carlo simulations to evaluate the performance of the proposed test and apply the method to several existing meta-analyses.
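The abstract does not spell out the computation, but its overall workflow (compute a small-study-effect statistic, then calibrate it with a bootstrap under the null of no selection) can be sketched. The Python sketch below is a deliberately simplified stand-in that pairs an Egger-type regression statistic with a parametric bootstrap; it is not the Copas-based score test itself, and the data-generating setup and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def egger_statistic(y, se):
    """Egger-type statistic: regress y/se on 1/se and return the t-statistic of the intercept,
    which captures funnel-plot asymmetry (a common symptom of small-study effects)."""
    z = y / se
    X = np.column_stack([np.ones_like(se), 1.0 / se])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0])

def bootstrap_pvalue(y, se, n_boot=2000):
    """Parametric bootstrap under the null of a common effect and no selection:
    resample study effects with the observed precisions and recompute the statistic."""
    t_obs = egger_statistic(y, se)
    w = 1.0 / se**2
    mu_hat = np.sum(w * y) / np.sum(w)          # pooled estimate under the null
    t_boot = np.array([egger_statistic(rng.normal(mu_hat, se), se) for _ in range(n_boot)])
    return np.mean(np.abs(t_boot) >= abs(t_obs))

# toy meta-analysis: 15 studies with no true selection
se = rng.uniform(0.1, 0.5, size=15)
y = rng.normal(0.3, se)
print("bootstrap p-value:", bootstrap_pvalue(y, se))
```

In the actual method, the statistic comes from the score function of the Copas selection model, and the bootstrap calibration addresses the irregularity caused by parameters that vanish under the null.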




Read also

The validity of conclusions from meta-analysis is potentially threatened by publication bias. Most existing procedures for correcting publication bias assume normality of the study-specific effects that account for between-study heterogeneity. However, this assumption may not be valid, and the performance of these bias correction procedures can be highly sensitive to departures from normality. Further, there exist few measures to quantify the magnitude of publication bias based on selection models. In this paper, we address both of these issues. First, we explore the use of heavy-tailed distributions for the study-specific effects within a Bayesian hierarchical framework. The deviance information criterion (DIC) is used to determine the appropriate distribution to use for conducting the final analysis. Second, we develop a new measure to quantify the magnitude of publication bias based on Hellinger distance. Our measure is easy to interpret and takes advantage of the estimation uncertainty afforded naturally by the posterior distribution. We illustrate our proposed approach through simulation studies and meta-analyses on lung cancer and antidepressants. To assess the prevalence of publication bias, we apply our method to 1500 meta-analyses of dichotomous outcomes in the Cochrane Database of Systematic Reviews. Our methods are implemented in the publicly available R package RobustBayesianCopas.
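The paper's bias measure is defined on its own terms; as a rough illustration of the ingredient it builds on, the sketch below evaluates the Hellinger distance between two normal distributions in closed form, for example comparing an uncorrected and a bias-corrected summary of the pooled effect. The choice of normal summaries and the numbers shown are our assumptions, not the paper's exact definition.

```python
import numpy as np

def hellinger_normal(mu1, sd1, mu2, sd2):
    """Closed-form Hellinger distance between N(mu1, sd1^2) and N(mu2, sd2^2), in [0, 1]."""
    v1, v2 = sd1**2, sd2**2
    bhattacharyya = np.sqrt(2 * sd1 * sd2 / (v1 + v2)) * np.exp(-(mu1 - mu2)**2 / (4 * (v1 + v2)))
    return np.sqrt(1 - bhattacharyya)

# hypothetical pooled-effect summaries: uncorrected vs. bias-corrected
print(hellinger_normal(0.42, 0.08, 0.25, 0.10))  # larger values suggest stronger publication bias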
Small study effects occur when smaller studies show different, often larger, treatment effects than large ones, which may threaten the validity of systematic reviews and meta-analyses. The most well-known reasons for small study effects include publication bias, outcome reporting bias and clinical heterogeneity. Methods to account for small study effects in univariate meta-analysis have been extensively studied. However, detecting small study effects in a multivariate meta-analysis setting remains an untouched research area. One of the complications is that different types of selection processes can be involved in the reporting of multivariate outcomes. For example, some studies may be completely unpublished while others may selectively report multiple outcomes. In this paper, we propose a score test as an overall test of small study effects in multivariate meta-analysis. Two detailed case studies are given to demonstrate the advantage of the proposed test over various naive applications of univariate tests in practice. Through simulation studies, the proposed test is found to retain nominal Type I error with considerable power in moderate sample size settings. Finally, we also evaluate the concordance between the proposed test with the naive application of univariate tests by evaluating 44 systematic reviews with multiple outcomes from the Cochrane Database.
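The multivariate score test itself is not reproduced here; the sketch below only illustrates the "naive application of univariate tests" that the abstract uses as a comparator, running an Egger-type regression test separately on each outcome of a toy bivariate meta-analysis. The simulated data and helper names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def egger_pvalue(y, se):
    """Per-outcome Egger-type test: regress y/se on 1/se and test whether the intercept is zero."""
    res = stats.linregress(1.0 / se, y / se)
    t = res.intercept / res.intercept_stderr
    return 2 * stats.t.sf(abs(t), df=len(y) - 2)

# toy bivariate meta-analysis: 20 studies, two outcomes reported per study
se = rng.uniform(0.1, 0.4, size=(20, 2))
y = rng.normal(0.2, se)
for k in range(2):
    print(f"outcome {k + 1}: naive univariate p-value = {egger_pvalue(y[:, k], se[:, k]):.3f}")
```

Such per-outcome tests ignore the correlation between outcomes and the possibility that published studies report only some outcomes selectively, which is exactly the gap an overall multivariate test is meant to address.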
In a network meta-analysis, some of the collected studies may deviate markedly from the others, for example having very unusual effect sizes. These deviating studies can be regarded as outlying with respect to the rest of the network and can be influential on the pooled results. Thus, it could be inappropriate to synthesize those studies without further investigation. In this paper, we propose two Bayesian methods to detect outliers in a network meta-analysis via: (a) a mean-shifted outlier model and (b) posterior predictive p-values constructed from ad-hoc discrepancy measures. The former method uses Bayes factors to formally test each study against outliers, while the latter provides a score of outlyingness for each study in the network, which allows one to numerically quantify the uncertainty associated with being an outlier. Furthermore, we present a simple method based on informative priors as part of the network meta-analysis model to down-weight the detected outliers. We conduct extensive simulations to evaluate the effectiveness of the proposed methodology while comparing it to some alternative, available outlier diagnostic tools. Two real networks of interventions are then used to demonstrate our methods in practice.
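As a rough, simplified illustration of the second ingredient (posterior predictive p-values built from a discrepancy measure), the sketch below applies the idea to an ordinary pairwise meta-analysis rather than a network: the between-study variance is held fixed at a DerSimonian-Laird estimate and a flat prior is placed on the overall mean, so this is a toy approximation of a Bayesian check, not the paper's model, and all names and data are ours.

```python
import numpy as np

rng = np.random.default_rng(2)

def dl_tau2(y, se):
    """DerSimonian-Laird between-study variance, plugged in as a fixed value below."""
    w = 1.0 / se**2
    mu_w = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_w)**2)
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / denom)

def posterior_predictive_pvalues(y, se, n_draws=4000):
    """Per-study posterior predictive p-values under a normal random-effects model,
    using a squared standardized deviation as the ad-hoc discrepancy measure."""
    v = dl_tau2(y, se) + se**2
    w = 1.0 / v
    mu_hat, mu_var = np.sum(w * y) / np.sum(w), 1.0 / np.sum(w)
    mu_draws = rng.normal(mu_hat, np.sqrt(mu_var), size=n_draws)      # posterior of the mean
    d_obs = (y[None, :] - mu_draws[:, None])**2 / v[None, :]          # observed discrepancy
    y_rep = rng.normal(mu_draws[:, None], np.sqrt(v)[None, :])        # replicated studies
    d_rep = (y_rep - mu_draws[:, None])**2 / v[None, :]
    return np.mean(d_rep >= d_obs, axis=0)   # small values flag potentially outlying studies

# toy data: one deviating study among ten
se = np.full(10, 0.15)
y = rng.normal(0.1, se)
y[0] = 1.2
print(np.round(posterior_predictive_pvalues(y, se), 3))
```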
As the most important tool for providing high-level evidence in evidence-based medicine, meta-analysis allows researchers to statistically summarize and combine data from multiple studies. In meta-analysis, mean differences are frequently used effect size measurements for continuous data, such as Cohen's d and Hedges' g statistics. To calculate mean-difference-based effect sizes, the sample mean and standard deviation are two essential summary measures. However, many clinical reports do not directly record the sample mean and standard deviation. Instead, the sample size, median, minimum and maximum values, and/or the first and third quartiles are reported. As a result, researchers have to transform the reported information into the sample mean and standard deviation in order to compute the effect size. Since most of the popular transformation methods were developed under the normality assumption of the underlying data, it is necessary to perform a pre-test before transforming the summary statistics. In this article, we introduce test statistics for three popular scenarios in meta-analysis. We suggest that medical researchers perform a normality test on the selected studies before using them to conduct further analysis. Moreover, we apply three different case studies to demonstrate the usage of the newly proposed test statistics. The real-data case studies indicate that the new test statistics are easy to apply in practice and that, by following the recommended path to conduct the meta-analysis, researchers can obtain more reliable conclusions.
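As a sketch of the kind of conversion this abstract refers to, the functions below approximate the sample mean and standard deviation from reported order statistics under a normality assumption, using order-statistic (Blom-type) normal approximations. These particular formulas are common choices assumed by us for illustration; they are not the pre-tests proposed in the article.

```python
from scipy import stats

def mean_sd_from_min_med_max(a, m, b, n):
    """Approximate mean and SD from {min, median, max, n} assuming roughly normal data.
    The mean uses the simple (min + 2*median + max)/4 rule; the SD divides the range by
    the expected normalized range of n normal draws (Blom-type plotting positions)."""
    mean = (a + 2.0 * m + b) / 4.0
    xi = 2.0 * stats.norm.ppf((n - 0.375) / (n + 0.25))
    return mean, (b - a) / xi

def mean_sd_from_quartiles(q1, m, q3, n):
    """Approximate mean and SD from {Q1, median, Q3, n} under the same normality assumption."""
    mean = (q1 + m + q3) / 3.0
    eta = 2.0 * stats.norm.ppf((0.75 * n - 0.125) / (n + 0.25))
    return mean, (q3 - q1) / eta

# hypothetical reported summaries from two primary studies
print(mean_sd_from_min_med_max(a=2.0, m=5.0, b=9.0, n=50))
print(mean_sd_from_quartiles(q1=3.5, m=5.0, q3=6.8, n=50))
```

Both conversions lean on normality of the underlying data, which is precisely why the article recommends a normality pre-test before applying them.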
According to Davey et al. (2011), who examined a total of 22,453 meta-analyses from the January 2008 issue of the Cochrane Database of Systematic Reviews, the median number of studies included in each meta-analysis is only three. In other words, about half or more of the meta-analyses conducted in the literature include only two or three studies. While the common-effect model (also referred to as the fixed-effect model) may lead to misleading results when the heterogeneity among studies is large, the conclusions based on the random-effects model may also be unreliable when the number of studies is small. Alternatively, the fixed-effects model avoids the restrictive assumption in the common-effect model and the need to estimate the between-study variance in the random-effects model. We note, however, that the fixed-effects model has been underappreciated and, until recently, rarely used in practice. In this paper, we compare all three models and demonstrate the usefulness of the fixed-effects model when the number of studies is small. In addition, we propose a new estimator for the unweighted average effect in the fixed-effects model. Simulations and real examples are also used to illustrate the benefits of the fixed-effects model and the new estimator.
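To make the distinction concrete, the sketch below contrasts the common-effect (inverse-variance weighted) estimate with the plain unweighted average of study-specific effects that the fixed-effects model targets. It does not implement the new estimator proposed in the paper, and the toy data are our own.

```python
import numpy as np

def pooled_estimates(y, se):
    """Return (estimate, SE) pairs for the common-effect pooled mean and for the
    plain unweighted average of the study-specific effects."""
    w = 1.0 / se**2
    common = np.sum(w * y) / np.sum(w)
    common_se = np.sqrt(1.0 / np.sum(w))
    unweighted = np.mean(y)
    unweighted_se = np.sqrt(np.sum(se**2)) / len(y)   # treats study estimates as independent
    return (common, common_se), (unweighted, unweighted_se)

# toy example with only three studies, as is typical of Cochrane meta-analyses
y = np.array([0.10, 0.35, 0.80])
se = np.array([0.05, 0.20, 0.30])
print(pooled_estimates(y, se))
```

The two targets differ whenever precise studies dominate the weighted estimate, which is one reason the unweighted average can be of separate interest when the number of studies is small.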