
Elastic Priors to Dynamically Borrow Information from Historical Data in Clinical Trials

Posted by: Ying Yuan
Publication date: 2020
Research field: Mathematical Statistics
Language: English





Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is how to effectively borrow information from historical data while maintaining a reasonable type I error. We propose the elastic prior approach to address this challenge and achieve dynamic information borrowing. Unlike existing approaches, this method proactively controls the behavior of dynamic information borrowing and type I errors by incorporating the well-known concept of a clinically meaningful difference through an elastic function, defined as a monotonic function of a congruence measure between historical data and trial data. The elastic function is constructed to satisfy a set of information-borrowing constraints prespecified by researchers or regulatory agencies, such that the prior borrows information when historical and trial data are congruent, but refrains from borrowing when they are incongruent. In doing so, the elastic prior improves power and reduces the risk of data dredging and bias. The elastic prior is information-borrowing consistent, i.e., it asymptotically controls type I and II errors at their nominal values when historical and trial data are not congruent, a unique characteristic of the elastic prior approach. Our simulation study, which evaluates finite-sample characteristics, confirms that, compared with existing methods, the elastic prior achieves better type I error control and yields competitive or higher power.
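To make the idea concrete, here is a minimal sketch (not the paper's exact construction) of dynamic borrowing for a binary endpoint: the congruence measure is taken as a standardized difference between historical and current response rates, and the elastic function is a placeholder logistic map to a borrowing weight in (0, 1) that discounts the historical data in a power-prior-like fashion. The `slope` and `cutoff` constants, the Beta(0.5, 0.5) base prior, and the example counts are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def elastic_borrowing_posterior(y, n, y0, n0, a=0.5, b=0.5, slope=2.0, cutoff=2.0):
    """Toy dynamic-borrowing posterior for a binary endpoint.

    y, n   : responders / sample size in the current control arm
    y0, n0 : responders / sample size in the historical control data
    a, b   : vague Beta prior parameters
    slope, cutoff : placeholder tuning constants for the elastic function
    """
    p, p0 = y / n, y0 / n0
    # Congruence measure: absolute standardized difference between the
    # historical and current response rates (smaller = more congruent).
    pooled = (y + y0) / (n + n0)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n + 1 / n0))
    T = abs(p - p0) / se
    # Elastic function: monotone map from the congruence measure to a
    # borrowing weight in (0, 1); borrows strongly when T is small and
    # almost nothing when T is large.  A logistic form is used purely
    # for illustration.
    g = 1.0 / (1.0 + np.exp(slope * (T - cutoff)))
    # Power-prior-style posterior: historical data enter with weight g.
    post = stats.beta(a + g * y0 + y, b + g * (n0 - y0) + (n - y))
    return g, post

g, post = elastic_borrowing_posterior(y=18, n=60, y0=95, n0=300)
print(f"borrowing weight g = {g:.2f}, posterior mean = {post.mean():.3f}")
```

In the paper, the elastic function is instead calibrated so that prespecified information-borrowing constraints (e.g., tied to a clinically meaningful difference) are satisfied; the logistic form above only illustrates the monotone-in-congruence behavior.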




Read also

In current clinical trial development, historical information is receiving more attention for the value it provides beyond sample size calculation. Meta-analytic-predictive (MAP) priors and robust MAP priors have been proposed for prospectively borrowing historical data on a single endpoint. To simultaneously synthesize control information from multiple endpoints in confirmatory clinical trials, we propose to approximate posterior probabilities from a Bayesian hierarchical model and estimate critical values by deep learning to construct pre-specified decision functions before trial conduct. Simulation studies and a case study demonstrate that our method additionally preserves power and performs satisfactorily under prior-data conflict.
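For background on the borrowing machinery mentioned above, the sketch below shows how a robust MAP prior for a single binary control endpoint can be updated: the prior is a Beta mixture (an informative component summarizing historical controls plus a vague robustifying component), and the conjugate update reweights the components by their marginal likelihoods. The mixture weights, Beta parameters, and data counts are hypothetical; this is not the multi-endpoint, deep-learning-calibrated procedure the paper proposes.

```python
import numpy as np
from scipy.special import betaln

def update_mixture_beta(weights, params, y, n):
    """Conjugate update of a Beta-mixture prior with binomial data (y out of n).

    weights : prior mixture weights, e.g. [0.8, 0.2]
    params  : list of (a, b) Beta parameters; informative component first,
              vague "robustifying" component second
    """
    log_w, post_params = [], []
    for w, (a, b) in zip(weights, params):
        # Marginal (beta-binomial) likelihood of the data under this component
        # (the binomial coefficient is common to all components and cancels).
        log_ml = betaln(a + y, b + n - y) - betaln(a, b)
        log_w.append(np.log(w) + log_ml)
        post_params.append((a + y, b + n - y))
    log_w = np.array(log_w)
    post_w = np.exp(log_w - log_w.max())
    post_w /= post_w.sum()
    return post_w, post_params

# Hypothetical numbers: an informative component summarizing historical controls
# plus a weakly informative Beta(1, 1) robust component.
post_w, post_params = update_mixture_beta(
    weights=[0.8, 0.2], params=[(30, 70), (1, 1)], y=25, n=50)
print(post_w, post_params)
```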
Response-adaptive randomization (RAR) is part of a wider class of data-dependent sampling algorithms, for which clinical trials are used as a motivating application. In that context, patient allocation to treatments is determined by randomization probabilities that are altered based on the accrued response data in order to achieve experimental goals. RAR has received abundant theoretical attention from the biostatistical literature since the 1930s and has been the subject of numerous debates. In the last decade, it has received renewed consideration from the applied and methodological communities, driven by successful practical examples and its widespread use in machine learning. Papers on the subject can give different views on its usefulness, and reconciling these may be difficult. This work aims to address this gap by providing a unified, broad and up-to-date review of methodological and practical issues to consider when debating the use of RAR in clinical trials.
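As one concrete member of the RAR class (not necessarily one of the procedures reviewed in the paper), the sketch below implements Thompson-sampling allocation for a two-arm binary-outcome trial: each patient is assigned to the arm whose response-rate draw from the current Beta posterior is larger, so allocation probabilities adapt to accrued responses. The response rates and sample size are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rates = [0.35, 0.50]        # hypothetical response rates: control, experimental
successes = np.zeros(2)
failures = np.zeros(2)

allocations = []
for patient in range(200):
    # Thompson sampling: draw a response rate for each arm from its Beta
    # posterior and assign the patient to the arm with the larger draw.
    draws = rng.beta(1 + successes, 1 + failures)
    arm = int(np.argmax(draws))
    response = rng.random() < true_rates[arm]
    successes[arm] += response
    failures[arm] += 1 - response
    allocations.append(arm)

print("share allocated to experimental arm:", np.mean(allocations))
print("posterior mean response rates:", (1 + successes) / (2 + successes + failures))
```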
A central goal in designing clinical trials is to find the test that maximizes power (or equivalently minimizes required sample size) for finding a true research hypothesis subject to the constraint of type I error. When there is more than one test, such as in clinical trials with multiple endpoints, the issues of optimal design and optimal policies become more complex. In this paper we address the question of how such optimal tests should be defined and how they can be found. We review different notions of power and how they relate to study goals, and also consider the requirements of type I error control and the nature of the policies. This leads us to formulate the optimal policy problem as an explicit optimization problem with objective and constraints which describe its specific desiderata. We describe a complete solution for deriving optimal policies for two hypotheses, which have desired monotonicity properties, and are computationally simple. For some of the optimization formulations this yields optimal policies that are identical to existing policies, such as Hommel's procedure or the procedure of Bittman et al. (2009), while for others it yields completely novel and more powerful policies than existing ones. We demonstrate the nature of our novel policies and their improved power extensively in simulation and on the APEX study (Cohen et al., 2016).
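A toy version of such an optimization, under strong simplifying assumptions (a policy class of fixed per-hypothesis z-thresholds, two correlated one-sided tests, disjunctive power as the objective, and Monte Carlo evaluation), can be written as a brute-force grid search. The paper's policies are richer than this; the sketch only illustrates the "maximize power subject to a type I error constraint" formulation, and all numeric settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, rho = 0.025, 0.5
cov = [[1, rho], [rho, 1]]

# Simulated z-statistics under the global null and under an assumed alternative.
null = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
alt = rng.multivariate_normal([2.8, 2.2], cov, size=100_000)

best = None
grid = np.linspace(1.8, 3.2, 29)
for c1 in grid:
    for c2 in grid:
        fwer = np.mean((null[:, 0] > c1) | (null[:, 1] > c2))
        if fwer > alpha:                                        # type I error constraint
            continue
        power = np.mean((alt[:, 0] > c1) | (alt[:, 1] > c2))    # disjunctive power
        if best is None or power > best[0]:
            best = (power, c1, c2)

print("best (power, c1, c2):", best)
```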
Xiaoru Wu, Zhiliang Ying (2011)
Covariate adjustment is an important tool in the analysis of randomized clinical trials and observational studies. It can be used to increase efficiency and thus power, and to reduce possible bias. While most statistical tests in randomized clinical trials are nonparametric in nature, approaches for covariate adjustment typically rely on specific regression models, such as the linear model for a continuous outcome, the logistic regression model for a dichotomous outcome and the Cox model for survival time. Several recent efforts have focused on model-free covariate adjustment. This paper makes use of the empirical likelihood method and proposes a nonparametric approach to covariate adjustment. A major advantage of the new approach is that it automatically utilizes covariate information in an optimal way without fitting nonparametric regression. The usual asymptotic properties, including the Wilks-type result of convergence to a chi-square distribution for the empirical likelihood ratio based test, and asymptotic normality for the corresponding maximum empirical likelihood estimator, are established. It is also shown that the resulting test is asymptotically most powerful and that the estimator for the treatment effect achieves the semiparametric efficiency bound. The new method is applied to the Global Use of Strategies to Open Occluded Coronary Arteries (GUSTO)-I trial. Extensive simulations are conducted, validating the theoretical findings.
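The empirical likelihood machinery underlying this kind of approach can be illustrated, in its simplest univariate form, by the EL ratio test of a mean with its Wilks-type chi-square calibration. This is only the basic building block, not the paper's covariate-adjusted two-sample estimator, and the simulated data below are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_ratio_test(x, mu0):
    """Empirical likelihood ratio test of H0: E[X] = mu0 (scalar case)."""
    z = np.asarray(x, dtype=float) - mu0
    if z.min() >= 0 or z.max() <= 0:
        return np.inf, 0.0  # mu0 lies outside the convex hull of the data
    # The EL weights are w_i = 1 / (n * (1 + lam * z_i)); the Lagrange
    # multiplier lam solves sum_i z_i / (1 + lam * z_i) = 0 within the
    # range that keeps all weights positive.
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    score = lambda lam: np.sum(z / (1 + lam * z))
    lam = brentq(score, lo + 1e-12, hi - 1e-12)
    # -2 log EL ratio is asymptotically chi-square with 1 df (Wilks-type result).
    stat = 2 * np.sum(np.log(1 + lam * z))
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(0)
stat, pval = el_ratio_test(rng.normal(0.3, 1.0, size=100), mu0=0.0)
print(f"EL ratio statistic = {stat:.2f}, p-value = {pval:.4f}")
```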
The ICH E9 addendum introduces the term intercurrent event to refer to events that happen after randomisation and that can either preclude observation of the outcome of interest or affect its interpretation. It proposes five strategies for handling intercurrent events to form an estimand but does not suggest statistical methods for estimation. In this paper we focus on the hypothetical strategy, where the treatment effect is defined under the hypothetical scenario in which the intercurrent event is prevented. For its estimation, we consider causal inference and missing data methods. We establish that certain causal inference estimators are identical to certain missing data estimators. These links may help those familiar with one set of methods but not the other. Moreover, using potential outcome notation allows us to state more clearly the assumptions on which missing data methods rely to estimate hypothetical estimands. This helps to indicate whether estimating a hypothetical estimand is reasonable, and what data should be used in the analysis. We show that hypothetical estimands can be estimated by exploiting data after intercurrent event occurrence, which is typically not used. We also present Monte Carlo simulations that illustrate the implementation and performance of the methods in different settings.
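As a toy illustration of the hypothetical strategy via a missing-data route (simulated data and a single regression imputation, rather than the full set of estimators the paper compares), the sketch below treats outcomes observed after an intercurrent event (rescue medication) as missing and predicts them from data collected before the event, under a missing-at-random-type assumption given arm, baseline, and an intermediate measurement.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
arm = rng.integers(0, 2, n)                                  # 0 = control, 1 = treatment
baseline = rng.normal(0, 1, n)
intermediate = baseline + 0.5 * arm + rng.normal(0, 1, n)    # pre-event measurement
# Intercurrent event (rescue medication) is more likely when the intermediate
# response is poor; rescue itself boosts the final outcome.
rescue = rng.random(n) < 1 / (1 + np.exp(2 + intermediate))
final = intermediate + 0.5 * arm + 1.0 * rescue + rng.normal(0, 1, n)

# Hypothetical strategy via a missing-data route: discard outcomes observed
# after the intercurrent event and predict what they would have been had
# rescue not occurred, using a regression fit on non-rescued patients.
# (A single deterministic imputation; point estimate only, no variance.)
X = np.column_stack([np.ones(n), arm, baseline, intermediate])
obs = ~rescue
beta, *_ = np.linalg.lstsq(X[obs], final[obs], rcond=None)
y_hypo = np.where(rescue, X @ beta, final)

print("naive difference in means:", final[arm == 1].mean() - final[arm == 0].mean())
print("hypothetical estimand    :", y_hypo[arm == 1].mean() - y_hypo[arm == 0].mean())
```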