
Towards More Flexible False Positive Control in Phase III Randomized Clinical Trials

Published by Changyu Shen
Publication date: 2019
Research field: Mathematical Statistics
Paper language: English





Phase III randomized clinical trials play a monumentally critical role in the evaluation of new medical products. Because of the uncertainty intrinsic to assessing the efficacy of a medical product, interpretation of trial results relies on statistical principles to control the rate of false positives below a desirable level. The well-established statistical hypothesis testing procedure suffers from two major limitations: a lack of flexibility in the thresholds used to claim success, and an inability to control the total number of false positives yielded by the large volume of trials. We propose two general theoretical frameworks, based on the conventional frequentist paradigm and on Bayesian perspectives, that offer realistic, flexible and effective solutions to these limitations. Our methods are based on the distribution of the effect sizes of the population of trials of interest. Estimating this distribution is practically feasible, as clinicaltrials.gov provides a centralized data repository with unbiased coverage of clinical trials. We provide a detailed development of the two frameworks, with numerical results obtained for industry-sponsored Phase III randomized clinical trials.
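A minimal sketch of the core idea, not the authors' exact method: if we posit a distribution of true effect sizes across a population of trials, we can estimate what fraction of trials declared "successful" at a given threshold are in fact false positives. The mixture weights, effect-size distribution, and threshold below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Hypothetical effect-size distribution across trials: a fraction are
# truly null, the rest have standardized effects drawn from a normal.
p_null = 0.5
effects = np.where(rng.random(n_trials) < p_null,
                   0.0,
                   rng.normal(loc=2.0, scale=1.0, size=n_trials))

# Observed z-statistic for each trial: true effect plus unit noise.
z = effects + rng.normal(size=n_trials)

threshold = 1.96  # conventional critical value, for illustration
claimed = z > threshold
false_pos = claimed & (effects == 0.0)

# Fraction of claimed successes that are false positives.
print(round(false_pos.sum() / claimed.sum(), 3))
```

Varying `threshold` against such a population-level distribution is what makes flexible, aggregate false-positive control possible, rather than fixing one per-trial error rate.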




Read also

Ting Ye, Jun Shao, Yanyao Yi (2020)
In randomized clinical trials, adjustments for baseline covariates at both design and analysis stages are highly encouraged by regulatory agencies. A recent trend is to use a model-assisted approach for covariate adjustment to gain credibility and efficiency while producing asymptotically valid inference even when the model is incorrect. In this article we present three considerations for better practice when model-assisted inference is applied to adjust for covariates under simple or covariate-adaptive randomized trials: (1) guaranteed efficiency gain: a model-assisted method should often gain but never hurt efficiency; (2) wide applicability: a valid procedure should be applicable, and preferably universally applicable, to all commonly used randomization schemes; (3) robust standard error: variance estimation should be robust to model misspecification and heteroscedasticity. To achieve these, we recommend a model-assisted estimator under an analysis of heterogeneous covariance working model including all covariates utilized in randomization. Our conclusions are based on an asymptotic theory that provides a clear picture of how covariate-adaptive randomization and regression adjustment alter statistical efficiency. Our theory is more general than the existing ones in terms of studying arbitrary functions of response means (including linear contrasts, ratios, and odds ratios), multiple arms, guaranteed efficiency gain, optimality, and universal applicability.
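An illustrative sketch, on simulated data, of a covariate-adjusted treatment-effect estimate in the spirit of the heterogeneous working model described above (separate covariate slopes per arm, fit via a centered-covariate interaction regression). The data-generating values and variable names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                       # baseline covariate
a = rng.integers(0, 2, size=n)               # randomized arm (0/1)
y = 1.0 * a + 2.0 * x + rng.normal(size=n)   # outcome; true effect = 1

xc = x - x.mean()                            # center the covariate
# Design matrix: intercept, treatment, centered covariate, interaction.
# The interaction term allows a different slope in each arm.
X = np.column_stack([np.ones(n), a, xc, a * xc])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[1], 2))  # adjusted treatment-effect estimate, near 1
```

Because the covariate is centered, the coefficient on `a` remains a consistent estimate of the average treatment effect even if the linear working model is wrong, which is the "never hurt efficiency" property the abstract emphasizes.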
Xiaoru Wu, Zhiliang Ying (2011)
Covariate adjustment is an important tool in the analysis of randomized clinical trials and observational studies. It can be used to increase efficiency and thus power, and to reduce possible bias. While most statistical tests in randomized clinical trials are nonparametric in nature, approaches for covariate adjustment typically rely on specific regression models, such as the linear model for a continuous outcome, the logistic regression model for a dichotomous outcome and the Cox model for survival time. Several recent efforts have focused on model-free covariate adjustment. This paper makes use of the empirical likelihood method and proposes a nonparametric approach to covariate adjustment. A major advantage of the new approach is that it automatically utilizes covariate information in an optimal way without fitting nonparametric regression. The usual asymptotic properties, including the Wilks-type result of convergence to a chi-square distribution for the empirical likelihood ratio based test, and asymptotic normality for the corresponding maximum empirical likelihood estimator, are established. It is also shown that the resulting test is asymptotically most powerful and that the estimator for the treatment effect achieves the semiparametric efficiency bound. The new method is applied to the Global Use of Strategies to Open Occluded Coronary Arteries (GUSTO)-I trial. Extensive simulations are conducted, validating the theoretical findings.
Suyu Liu, Ying Yuan (2013)
Interval designs are a class of phase I trial designs for which the decision of dose assignment is determined by comparing the observed toxicity rate at the current dose with a prespecified (toxicity tolerance) interval. If the observed toxicity rate is located within the interval, we retain the current dose; if the observed toxicity rate is greater than the upper boundary of the interval, we deescalate the dose; and if the observed toxicity rate is smaller than the lower boundary of the interval, we escalate the dose. The most critical issue for the interval design is choosing an appropriate interval so that the design has good operating characteristics. By casting dose finding as a Bayesian decision-making problem, we propose new flexible methods to select the interval boundaries so as to minimize the probability of inappropriate dose assignment for patients. We show, both theoretically and numerically, that the resulting optimal interval designs not only have desirable finite- and large-sample properties, but also are particularly easy to implement in practice. Compared to existing designs, the proposed (local) optimal design has comparable average performance, but a lower risk of yielding a poorly performing clinical trial.
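The dose-assignment rule quoted above is simple enough to state as a small function. The interval boundaries below are placeholders; the paper's contribution is precisely how to choose them optimally.

```python
def interval_decision(n_toxicities, n_treated, lower=0.2, upper=0.4):
    """Interval-design dose decision for the next cohort.

    Retain the current dose if the observed toxicity rate falls inside
    [lower, upper]; escalate if below; de-escalate if above.
    Boundary values here are illustrative, not the optimized ones.
    """
    rate = n_toxicities / n_treated
    if rate < lower:
        return "escalate"
    if rate > upper:
        return "deescalate"
    return "retain"

print(interval_decision(1, 6))  # 1/6 ≈ 0.17 < 0.2  -> escalate
print(interval_decision(2, 6))  # 2/6 ≈ 0.33 inside -> retain
print(interval_decision(3, 6))  # 3/6 = 0.50 > 0.4  -> deescalate
```

This transparency at the bedside, with all statistical work front-loaded into choosing the interval, is what makes these designs easy to implement in practice.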
Li Yang, Wei Ma, Yichen Qin (2020)
Concerns have been expressed over the validity of statistical inference under covariate-adaptive randomization despite its extensive use in clinical trials. In the literature, the inferential properties under covariate-adaptive randomization have been mainly studied for continuous responses; in particular, it is well known that the usual two-sample t-test for treatment effect is typically conservative, in the sense that the actual test size is smaller than the nominal level. This phenomenon of invalid tests has also been found for generalized linear models without adjusting for the covariates, and is sometimes more worrisome due to inflated Type I error. The purpose of this study is to examine the unadjusted test for treatment effect under generalized linear models and covariate-adaptive randomization. For a large class of covariate-adaptive randomization methods, we obtain the asymptotic distribution of the test statistic under the null hypothesis and derive the conditions under which the test is conservative, valid, or anti-conservative. Several commonly used generalized linear models, such as logistic regression and Poisson regression, are discussed in detail. An adjustment method is also proposed to achieve a valid size based on the asymptotic results. Numerical studies confirm the theoretical findings and demonstrate the effectiveness of the proposed adjustment method.
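The conservativeness of the unadjusted test is easy to see in a toy simulation (a sketch under simple assumptions, not the paper's setting): balance treatment exactly within strata of a prognostic covariate, then apply the plain two-sample z-test. Its estimated standard error includes covariate variability that the balanced design has already removed, so the empirical size falls well below nominal.

```python
import numpy as np

def rejection_rate(n=200, n_sims=2000, seed=2):
    """Empirical size of the unadjusted two-sample z-test under the null,
    with treatment balanced exactly within strata of a prognostic
    covariate (a simple covariate-adaptive scheme)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        x = rng.integers(0, 2, size=n)       # binary prognostic covariate
        a = np.zeros(n, dtype=int)
        for s in (0, 1):                     # balance arms within each stratum
            idx = rng.permutation(np.flatnonzero(x == s))
            a[idx[: len(idx) // 2]] = 1
        y = 2.0 * x + rng.normal(size=n)     # outcome; no treatment effect
        y1, y0 = y[a == 1], y[a == 0]
        se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
        z = (y1.mean() - y0.mean()) / se
        rejections += abs(z) > 1.96
    return rejections / n_sims

rate = rejection_rate()
print(rate)  # far below the nominal 0.05: the unadjusted test is conservative
```

For generalized linear models the abstract notes the opposite can also happen (inflated Type I error), which is why the paper's conditions for conservative, valid, or anti-conservative behavior matter.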
Simulation offers a simple and flexible way to estimate the power of a clinical trial when analytic formulae are not available. The computational burden of using simulation has, however, restricted its application to only the simplest of sample size determination problems, minimising a single parameter (the overall sample size) subject to power being above a target level. We describe a general framework for solving simulation-based sample size determination problems with several design parameters over which to optimise and several conflicting criteria to be minimised. The method is based on an established global optimisation algorithm widely used in the design and analysis of computer experiments, using a non-parametric regression model as an approximation of the true underlying power function. The method is flexible, can be used for almost any problem for which power can be estimated using simulation, and can be implemented using existing statistical software packages. We illustrate its application to three increasingly complicated sample size determination problems involving complex clustering structures, co-primary endpoints, and small sample considerations.
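The basic building block the framework above optimizes over is a Monte Carlo power estimate: simulate many trials under an assumed effect, apply the planned test to each, and record the rejection fraction. A minimal sketch with a two-sample z-test and illustrative (assumed) effect size and sample size:

```python
import numpy as np

def estimate_power(n_per_arm, effect=0.5, alpha_crit=1.96,
                   n_sims=5000, seed=0):
    """Monte Carlo power of a two-sided two-sample z-test,
    assuming unit-variance normal outcomes in each arm."""
    rng = np.random.default_rng(seed)
    y0 = rng.normal(0.0, 1.0, size=(n_sims, n_per_arm))      # control arm
    y1 = rng.normal(effect, 1.0, size=(n_sims, n_per_arm))   # treated arm
    se = np.sqrt(2.0 / n_per_arm)
    z = (y1.mean(axis=1) - y0.mean(axis=1)) / se
    return float((np.abs(z) > alpha_crit).mean())

print(round(estimate_power(64), 2))  # ≈ 0.8 for effect 0.5, 64 per arm
```

Each call like this is one (noisy, relatively expensive) evaluation of the power surface; the surrogate-model optimization described in the abstract exists precisely because searching over several design parameters by brute-force simulation is too costly.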