In randomized clinical trials, adjustment for baseline covariates at both the design and analysis stages is strongly encouraged by regulatory agencies. A recent trend is to use a model-assisted approach for covariate adjustment to gain credibility and efficiency while producing asymptotically valid inference even when the model is incorrect. In this article, we present three considerations for better practice when model-assisted inference is applied to adjust for covariates under simple or covariate-adaptive randomization: (1) guaranteed efficiency gain: a model-assisted method should often gain and never hurt efficiency; (2) wide applicability: a valid procedure should be applicable, and preferably universally applicable, to all commonly used randomization schemes; (3) robust standard error: variance estimation should be robust to model misspecification and heteroscedasticity. To achieve these goals, we recommend a model-assisted estimator under an analysis-of-heterogeneous-covariance working model that includes all covariates utilized in randomization. Our conclusions are based on an asymptotic theory that provides a clear picture of how covariate-adaptive randomization and regression adjustment alter statistical efficiency. Our theory is more general than existing ones in that it covers arbitrary functions of response means (including linear contrasts, ratios, and odds ratios), multiple arms, guaranteed efficiency gain, optimality, and universal applicability.
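To make the recommended type of working model concrete, the sketch below (a minimal Python illustration with hypothetical inputs y, a, x, not the article's exact estimator or code) implements a covariate-adjusted difference in means with arm-specific intercepts and slopes on centered covariates, together with a sandwich-type standard error based on the empirical variance of influence-function contributions.

```python
import numpy as np

def adjusted_difference(y, a, x):
    """Covariate-adjusted difference in means from a linear working model with
    arm-specific intercepts and slopes on centered covariates.

    y : (n,) outcomes;  a : (n,) 0/1 treatment indicators;  x : (n, p) covariates.
    Returns the adjusted estimate of E[Y(1)] - E[Y(0)] and a sandwich-type SE.
    """
    y = np.asarray(y, float)
    a = np.asarray(a, int)
    n = len(y)
    x = np.asarray(x, float).reshape(n, -1)
    xc = x - x.mean(axis=0)                      # center covariates at the pooled mean

    means, infl = [], []
    for arm in (0, 1):
        idx = a == arm
        pi = idx.mean()                          # observed assignment proportion
        X = np.column_stack([np.ones(idx.sum()), xc[idx]])
        beta, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
        means.append(beta[0])                    # predicted arm mean at the pooled covariate mean
        resid = np.zeros(n)
        resid[idx] = y[idx] - X @ beta           # within-arm residuals
        infl.append(resid / pi + xc @ beta[1:])  # influence-function contribution per subject
    diff = means[1] - means[0]
    # sandwich-type variance: empirical variance of the influence-function difference;
    # under simple randomization this is robust to misspecification of the linear
    # working model and to heteroscedasticity across arms
    se = np.sqrt(np.var(infl[1] - infl[0], ddof=1) / n)
    return diff, se
```

Fitting a separate working regression in each arm, as above, is algebraically equivalent to a single least-squares fit with full treatment-by-covariate interactions on centered covariates, which is why such working models accommodate heterogeneous covariances between outcome and covariates across arms.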
Covariate adjustment is an important tool in the analysis of randomized clinical trials and observational studies. It can be used to increase efficiency, and thus power, and to reduce possible bias. While most statistical tests in randomized clinical trials are nonparametric in nature, approaches to covariate adjustment typically rely on specific regression models, such as the linear model for a continuous outcome, the logistic regression model for a dichotomous outcome, and the Cox model for survival time. Several recent efforts have focused on model-free covariate adjustment. This paper makes use of the empirical likelihood method and proposes a nonparametric approach to covariate adjustment. A major advantage of the new approach is that it automatically utilizes covariate information in an optimal way without fitting a nonparametric regression. The usual asymptotic properties are established, including a Wilks-type result that the empirical likelihood ratio test statistic converges to a chi-square distribution and asymptotic normality of the corresponding maximum empirical likelihood estimator. It is also shown that the resulting test is asymptotically most powerful and that the estimator of the treatment effect achieves the semiparametric efficiency bound. The new method is applied to the Global Use of Strategies to Open Occluded Coronary Arteries (GUSTO)-I trial. Extensive simulations are conducted, validating the theoretical findings.
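For readers unfamiliar with the empirical likelihood machinery, the following minimal sketch (not the paper's procedure) computes the standard -2 log empirical likelihood ratio for a vector of estimating functions via Owen's convex dual; the function name, the use of scipy's BFGS optimizer, and the quadratic extension of the logarithm are implementation choices of this illustration.

```python
import numpy as np
from scipy.optimize import minimize

def el_log_ratio(g):
    """-2 log empirical likelihood ratio for H0: E[g(Z)] = 0.

    g : (n, q) array whose i-th row is the estimating function g(Z_i).
    Solves Owen's dual: maximize sum_i log(1 + lam' g_i) over lam, using a
    quadratic continuation of log below a small threshold so the objective
    is defined for every lam.
    """
    g = np.asarray(g, float)
    if g.ndim == 1:
        g = g[:, None]
    n, q = g.shape
    eps = 1.0 / n

    def log_star(x):
        # log(x) for x >= eps; smooth quadratic continuation below eps
        return np.where(x >= eps,
                        np.log(np.maximum(x, eps)),
                        np.log(eps) - 1.5 + 2 * x / eps - 0.5 * (x / eps) ** 2)

    def neg_dual(lam):
        return -np.sum(log_star(1.0 + g @ lam))

    lam_hat = minimize(neg_dual, np.zeros(q), method="BFGS").x
    return 2.0 * np.sum(log_star(1.0 + g @ lam_hat))
```

For example, passing g with rows z_i - mu0 tests whether the mean of Z equals mu0, and the returned statistic is compared with a chi-square distribution with q degrees of freedom, in line with the Wilks-type result mentioned above.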
Concerns have been expressed over the validity of statistical inference under covariate-adaptive randomization despite its extensive use in clinical trials. In the literature, the inferential properties of covariate-adaptive randomization have been studied mainly for continuous responses; in particular, it is well known that the usual two-sample t-test for the treatment effect is typically conservative, in the sense that the actual test size is smaller than the nominal level. This phenomenon of invalid tests has also been found for generalized linear models without covariate adjustment and is sometimes more worrisome because of inflated Type I error. The purpose of this study is to examine the unadjusted test for the treatment effect under generalized linear models and covariate-adaptive randomization. For a large class of covariate-adaptive randomization methods, we obtain the asymptotic distribution of the test statistic under the null hypothesis and derive the conditions under which the test is conservative, valid, or anti-conservative. Several commonly used generalized linear models, such as logistic regression and Poisson regression, are discussed in detail. An adjustment method is also proposed to achieve a valid size based on the asymptotic results. Numerical studies confirm the theoretical findings and demonstrate the effectiveness of the proposed adjustment method.
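The size distortion described here is easy to explore in a small simulation. The sketch below (with arbitrary parameter values, not the study's setup) compares the empirical rejection rate of the unadjusted Wald test in a logistic working model under simple randomization versus stratified permuted-block randomization, when the outcome depends on the stratification covariate and there is no treatment effect; under such a configuration the stratified design typically yields a rejection rate below the nominal 5% level.

```python
import numpy as np

rng = np.random.default_rng(0)

def permuted_blocks(n, block=4):
    """1:1 assignments generated within permuted blocks of the given size."""
    reps = -(-n // block)  # ceiling division
    a = np.concatenate([rng.permutation([0, 1] * (block // 2)) for _ in range(reps)])
    return a[:n]

def unadjusted_wald_z(y, a):
    """Wald z-statistic for b1 in logit(P(Y=1)) = b0 + b1*A.
    With a single binary regressor the MLE reproduces the group proportions."""
    p1, p0 = y[a == 1].mean(), y[a == 0].mean()
    n1, n0 = (a == 1).sum(), (a == 0).sum()
    b1 = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))
    se = np.sqrt(1 / (n1 * p1 * (1 - p1)) + 1 / (n0 * p0 * (1 - p0)))
    return b1 / se

def empirical_size(n=400, n_rep=2000, stratified=True):
    reject = 0
    for _ in range(n_rep):
        x = rng.integers(0, 2, n)                # binary stratification covariate
        if stratified:                           # stratified permuted blocks
            a = np.empty(n, int)
            for s in (0, 1):
                idx = np.where(x == s)[0]
                a[idx] = permuted_blocks(len(idx))
        else:                                    # simple randomization
            a = rng.integers(0, 2, n)
        # strong null: no treatment effect; outcome depends on the covariate only
        p = 1 / (1 + np.exp(-(-0.5 + 1.5 * x)))
        y = rng.binomial(1, p)
        reject += abs(unadjusted_wald_z(y, a)) > 1.96
    return reject / n_rep

print("simple:", empirical_size(stratified=False),
      "stratified:", empirical_size(stratified=True))
```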
Phase III randomized clinical trials play a critical role in the evaluation of new medical products. Because of the intrinsic uncertainty in assessing the efficacy of a medical product, interpretation of trial results relies on statistical principles to control the rate of false positives below a desirable level. The well-established statistical hypothesis testing procedure suffers from two major limitations: the lack of flexibility in the thresholds used to claim success, and the inability to control the total number of false positives that could be yielded by the large volume of trials. We propose two general theoretical frameworks, based on the conventional frequentist paradigm and on Bayesian perspectives, which offer realistic, flexible, and effective solutions to these limitations. Our methods are based on the distribution of the effect sizes of the population of trials of interest. Estimation of this distribution is practically feasible because clinicaltrials.gov provides a centralized data repository with unbiased coverage of clinical trials. We provide a detailed development of the two frameworks, with numerical results obtained for industry-sponsored Phase III randomized clinical trials.
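As a toy illustration of how a distribution of effect sizes across a population of trials can inform the control of false positives, the snippet below performs a simple empirical-Bayes style calculation under an assumed two-component mixture of effect sizes; the mixture parameters are placeholders, and this calculation is a generic stand-in rather than either of the proposed frameworks.

```python
import numpy as np
from scipy.stats import norm

def expected_false_positive_fraction(z_threshold, p_null=0.5, mu=0.3, tau=0.2, se=0.1):
    """Toy calculation: among trials declared successful (Z > z_threshold),
    what fraction are expected to have a truly null effect?

    Effect sizes across the trial population follow a mixture: null with
    probability p_null, otherwise Normal(mu, tau^2); each trial reports
    Z = estimate / se with estimate ~ Normal(true effect, se^2).
    All parameter values here are placeholders, not estimates from data.
    """
    # P(Z > c | null effect): Z ~ N(0, 1)
    p_reject_null = norm.sf(z_threshold)
    # P(Z > c | non-null): marginally Z ~ N(mu/se, 1 + (tau/se)^2)
    p_reject_alt = norm.sf(z_threshold, loc=mu / se,
                           scale=np.sqrt(1 + (tau / se) ** 2))
    num = p_null * p_reject_null
    return num / (num + (1 - p_null) * p_reject_alt)

# Raising the success threshold lowers the expected fraction of false positives
# among declared successes, the kind of trade-off such frameworks formalize.
for c in (1.96, 2.5, 3.0):
    print(c, round(expected_false_positive_fraction(c), 4))
```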
We present a general framework for using existing data to estimate the efficiency gain from using a covariate-adjusted estimator of a marginal treatment effect in a future randomized trial. We describe conditions under which it is possible to define a mapping from the distribution that generated the existing external data to the relative efficiency of a covariate-adjusted estimator compared with an unadjusted estimator. Under these conditions, the relative efficiency approximates the ratio of sample sizes needed to achieve a desired power. We consider two situations, in which the outcome is either fully or partially observed, and several treatment effect estimands that are of particular interest in most trials. For each such estimand, we develop a semiparametrically efficient estimator of the relative efficiency that allows the application of flexible statistical learning tools to estimate the nuisance functions, together with an analytic form of a corresponding Wald-type confidence interval. We also propose a double bootstrap scheme for constructing confidence intervals. We demonstrate the performance of the proposed methods through simulation studies and apply them to existing data to estimate the relative efficiency of using covariate adjustment in Covid-19 therapeutic trials.
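As a rough, back-of-the-envelope counterpart to the efficient estimator developed here, the sketch below approximates the relative efficiency for a difference in fully observed continuous means under 1:1 randomization by one minus the cross-fitted R-squared of a linear regression of the outcome on the covariates in the external data; the linear working model, equal allocation, and a homogeneous adjustment model across arms are simplifying assumptions of this illustration, and the function name and inputs are hypothetical.

```python
import numpy as np

def approx_relative_efficiency(y, x, n_folds=5, seed=0):
    """Plug-in approximation of Var(adjusted) / Var(unadjusted) for a
    difference in means under 1:1 randomization, taken as 1 - R^2 of a
    cross-fitted linear regression of the outcome on the covariates
    estimated from external data (purely illustrative).
    """
    y = np.asarray(y, float)
    X = np.column_stack([np.ones(len(y)), np.asarray(x, float)])
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(y)) % n_folds
    pred = np.empty_like(y)
    for k in range(n_folds):                 # cross-fitting to avoid overfitting bias
        tr, te = folds != k, folds == k
        beta, *_ = np.linalg.lstsq(X[tr], y[tr], rcond=None)
        pred[te] = X[te] @ beta
    return np.var(y - pred, ddof=1) / np.var(y, ddof=1)

# The returned value also approximates the ratio of sample sizes needed to
# reach the same power with versus without covariate adjustment.
```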
Detection of interactions between treatment effects and patient descriptors in clinical trials is critical for optimizing the drug development process. The increasing volume of data accumulated in clinical trials provides a unique opportunity to discover new biomarkers and to further the goal of personalized medicine, but it also requires innovative, robust biomarker detection methods capable of detecting non-linear, and sometimes weak, signals. We propose a set of novel univariate statistical tests, based on the theory of random walks, that are able to capture non-linear and non-monotonic covariate-treatment interactions. We also propose a novel combined test, which combines the power of all of the proposed univariate tests in a single general-purpose tool. We present results for both synthetic and real-world clinical trials, where we compare our method with state-of-the-art techniques and demonstrate the utility and robustness of our approach.
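To convey the flavor of a random-walk-based interaction test, the following generic cumulative-sum permutation sketch (not the authors' proposed tests) sorts patients by a candidate biomarker, accumulates signed arm-contrast contributions along that ordering, and calibrates the maximum excursion of the resulting walk by permutation; a non-linear or non-monotonic interaction then shows up as an unusually large excursion somewhere along the path.

```python
import numpy as np

def random_walk_interaction_test(y, a, x, n_perm=1000, seed=0):
    """Permutation test for a covariate-treatment interaction based on the
    maximum excursion of a cumulative-sum walk over patients sorted by the
    candidate biomarker x.  y: outcomes, a: 0/1 arms, x: biomarker values.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    a = np.asarray(a, int)
    x = np.asarray(x, float)
    n = len(y)
    # signed, arm-centered contributions: +residual for treated, -residual for
    # control, so the walk drifts only where the arm difference varies with x
    s = np.where(a == 1, y - y[a == 1].mean(), -(y - y[a == 0].mean()))

    def max_excursion(order):
        walk = np.cumsum(s[order])
        trend = np.arange(1, n + 1) / n * walk[-1]   # straight-line detrending
        return np.max(np.abs(walk - trend))

    obs = max_excursion(np.argsort(x))
    # permuting the ordering breaks any association between the biomarker and
    # the arm-contrast contributions, giving a reference distribution
    perm = [max_excursion(rng.permutation(n)) for _ in range(n_perm)]
    p_value = (1 + sum(p >= obs for p in perm)) / (n_perm + 1)
    return obs, p_value
```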