
Deep Historical Borrowing Framework to Prospectively and Simultaneously Synthesize Control Information in Confirmatory Clinical Trials with Multiple Endpoints

Posted by: Tianyu Zhan
Publication date: 2020
Research field: Mathematical statistics
Paper language: English





In current clinical trial development, historical information is receiving increasing attention for the value it provides beyond sample size calculation. Meta-analytic-predictive (MAP) priors and robust MAP priors have been proposed for prospectively borrowing historical data on a single endpoint. To simultaneously synthesize control information from multiple endpoints in confirmatory clinical trials, we propose to approximate posterior probabilities from a Bayesian hierarchical model and to estimate critical values by deep learning, so that pre-specified decision functions can be constructed before the trial is conducted. Simulation studies and a case study demonstrate that our method additionally preserves power and performs satisfactorily under prior-data conflict.
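As a rough illustration of the decision-function idea only, the hedged sketch below uses a simple normal-normal borrowing rule for a single control endpoint and calibrates the critical value on the posterior probability by plain Monte Carlo under the null, as a stand-in for the paper's deep-learning approximation and its multi-endpoint hierarchical model. The borrowing weight `w`, the historical summaries, and all numeric settings are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch (not the authors' implementation): normal-normal borrowing of
# historical control information for one endpoint, with the decision threshold
# ("critical value") calibrated by Monte Carlo simulation under the null.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2020)

def posterior_prob_benefit(y_trt, y_ctl, hist_mean, hist_se, w=0.5):
    """Posterior probability that the treatment mean exceeds the control mean.

    The current control estimate is shrunk toward the historical control mean
    with weight w (a crude stand-in for a robust MAP prior); everything is
    treated as normal with estimated standard errors for simplicity.
    """
    n_t, n_c = len(y_trt), len(y_ctl)
    se_t = y_trt.std(ddof=1) / np.sqrt(n_t)
    se_c = y_ctl.std(ddof=1) / np.sqrt(n_c)
    ctl_mean = w * hist_mean + (1 - w) * y_ctl.mean()
    ctl_se = np.sqrt((w * hist_se) ** 2 + ((1 - w) * se_c) ** 2)
    diff_mean = y_trt.mean() - ctl_mean
    diff_se = np.sqrt(se_t ** 2 + ctl_se ** 2)
    return norm.cdf(diff_mean / diff_se)   # Pr(treatment - control > 0 | data)

def calibrate_critical_value(n_per_arm, hist_mean, hist_se, alpha=0.025, n_sim=20_000):
    """Find the threshold on the posterior probability that keeps the
    one-sided type I error at alpha when treatment == control == hist_mean."""
    probs = np.empty(n_sim)
    for i in range(n_sim):
        y_t = rng.normal(hist_mean, 1.0, n_per_arm)
        y_c = rng.normal(hist_mean, 1.0, n_per_arm)
        probs[i] = posterior_prob_benefit(y_t, y_c, hist_mean, hist_se)
    return np.quantile(probs, 1 - alpha)

crit = calibrate_critical_value(n_per_arm=100, hist_mean=0.0, hist_se=0.1)
print(f"calibrated critical value on the posterior probability: {crit:.3f}")
```

In the paper's setting this calibration step is what the deep-learning component replaces, so that the decision rule for all endpoints can be fixed before trial conduct; the Monte Carlo version here simply makes the calibration logic visible for one endpoint.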




Read also

82 - Liyun Jiang, Lei Nie, Ying Yuan 2020
Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is how to effectively borrow information from historical data while maintaining a reasonable type I error. We propose the elastic prior approach to address this challenge and achieve dynamic information borrowing. Unlike existing approaches, this method proactively controls the behavior of dynamic information borrowing and type I errors by incorporating a well-known concept of clinically meaningful difference through an elastic function, defined as a monotonic function of a congruence measure between historical data and trial data. The elastic function is constructed to satisfy a set of information-borrowing constraints prespecified by researchers or regulatory agencies, such that the prior will borrow information when historical and trial data are congruent, but refrain from information borrowing when they are incongruent. In doing so, the elastic prior improves power and reduces the risk of data dredging and bias. The elastic prior is information-borrowing consistent, i.e., it asymptotically controls type I and II errors at the nominal values when historical data and trial data are not congruent, a unique characteristic of the elastic prior approach. Our simulation study evaluating the finite-sample characteristics confirms that, compared to existing methods, the elastic prior has better type I error control and yields competitive or higher power.
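To make the elastic-function idea concrete, the hedged sketch below scores congruence between historical and current control data and passes it through a monotone weight that governs how much historical precision is borrowed. The logistic form, its tuning constants, and the normal model are illustrative assumptions, not the published construction.

```python
# Hedged sketch of the elastic-prior idea (not the authors' code): a congruence
# measure between historical and current control data is mapped through a
# monotone "elastic" function that scales the amount of borrowed information.
import numpy as np

def congruence(hist_mean, hist_se, cur_mean, cur_se):
    """Standardized distance between historical and current control estimates
    (smaller = more congruent)."""
    return abs(hist_mean - cur_mean) / np.sqrt(hist_se**2 + cur_se**2)

def elastic_weight(t, a=4.0, b=2.0):
    """Monotone decreasing elastic function of the congruence statistic t:
    close to 1 when the data agree, close to 0 when they conflict."""
    return 1.0 / (1.0 + np.exp(a * (t - b)))

def borrowed_control_estimate(hist_mean, hist_se, cur_mean, cur_se):
    """Precision-weighted control estimate with the historical precision
    discounted by the elastic weight."""
    w = elastic_weight(congruence(hist_mean, hist_se, cur_mean, cur_se))
    prec_hist = w / hist_se**2          # discounted historical precision
    prec_cur = 1.0 / cur_se**2
    mean = (prec_hist * hist_mean + prec_cur * cur_mean) / (prec_hist + prec_cur)
    se = np.sqrt(1.0 / (prec_hist + prec_cur))
    return mean, se, w

# Congruent case borrows heavily; conflicting case borrows almost nothing.
print(borrowed_control_estimate(0.30, 0.03, 0.31, 0.05))
print(borrowed_control_estimate(0.30, 0.03, 0.55, 0.05))
```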
A central goal in designing clinical trials is to find the test that maximizes power (or equivalently minimizes the required sample size) for finding a true research hypothesis subject to the constraint of type I error. When there is more than one test, such as in clinical trials with multiple endpoints, the issues of optimal design and optimal policies become more complex. In this paper we address the question of how such optimal tests should be defined and how they can be found. We review different notions of power and how they relate to study goals, and also consider the requirements of type I error control and the nature of the policies. This leads us to formulate the optimal policy problem as an explicit optimization problem with an objective and constraints that describe its specific desiderata. We describe a complete solution for deriving optimal policies for two hypotheses, which have desired monotonicity properties and are computationally simple. For some of the optimization formulations this yields optimal policies that are identical to existing policies, such as Hommel's procedure or the procedure of Bittman et al. (2009), while for others it yields completely novel and more powerful policies than existing ones. We demonstrate the nature of our novel policies and their improved power extensively in simulation and on the APEX study (Cohen et al., 2016).
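The "explicit optimization problem" framing can be illustrated, in a heavily simplified form, by a brute-force search over a tiny policy family: the hedged sketch below takes the policy to be a pair of one-sided z-thresholds, constrains the familywise error rate under the global null, and maximizes the expected number of rejections under one assumed alternative. The correlation, effect sizes, and policy family are illustrative assumptions, not the procedures derived in the paper.

```python
# Hedged, highly simplified sketch of optimal-policy search for two hypotheses:
# grid search over z-thresholds (c1, c2) subject to a simulated FWER constraint.
import numpy as np

rng = np.random.default_rng(1)
rho, alt = 0.3, np.array([2.5, 2.0])          # assumed correlation and alternative means
cov = np.array([[1.0, rho], [rho, 1.0]])
z_null = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
z_alt = rng.multivariate_normal(alt, cov, size=100_000)

def fwer(c):
    """Probability of at least one rejection under the global null."""
    return np.mean((z_null[:, 0] > c[0]) | (z_null[:, 1] > c[1]))

def expected_rejections(c):
    """Expected number of rejections under the assumed alternative."""
    return np.mean((z_alt[:, 0] > c[0]).astype(float) + (z_alt[:, 1] > c[1]))

grid = np.linspace(1.8, 3.2, 40)
best_c, best_val = None, -np.inf
for c1 in grid:
    for c2 in grid:
        if fwer((c1, c2)) <= 0.025:
            val = expected_rejections((c1, c2))
            if val > best_val:
                best_c, best_val = (c1, c2), val

print("chosen thresholds:", best_c,
      "FWER:", round(fwer(best_c), 4),
      "expected rejections:", round(best_val, 3))
```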
Simulation offers a simple and flexible way to estimate the power of a clinical trial when analytic formulae are not available. The computational burden of using simulation has, however, restricted its application to only the simplest of sample size determination problems, minimising a single parameter (the overall sample size) subject to power being above a target level. We describe a general framework for solving simulation-based sample size determination problems with several design parameters over which to optimise and several conflicting criteria to be minimised. The method is based on an established global optimisation algorithm widely used in the design and analysis of computer experiments, using a non-parametric regression model as an approximation of the true underlying power function. The method is flexible, can be used for almost any problem for which power can be estimated using simulation, and can be implemented using existing statistical software packages. We illustrate its application to three increasingly complicated sample size determination problems involving complex clustering structures, co-primary endpoints, and small sample considerations.
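A minimal version of simulation-based sample size determination with a cheap surrogate of the power function is sketched below: power is estimated by Monte Carlo on a coarse grid of sample sizes, a smooth parametric curve is fitted to those estimates, and the smallest n with predicted power at or above 90% is reported. The paper uses a nonparametric, computer-experiment-style surrogate and optimizes over several design parameters and criteria; the logistic-in-n surrogate, effect size, and one-sided two-sample z-test here are simplifying assumptions for illustration.

```python
# Hedged sketch: Monte Carlo power estimates on a grid, smoothed by a simple
# parametric surrogate, then the smallest n meeting the power target is chosen.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(7)
effect, alpha, target = 0.4, 0.025, 0.9       # assumed effect size and design targets

def simulated_power(n, n_sim=2_000):
    """Monte Carlo power of a one-sided two-sample z-test with n per arm."""
    zcrit = norm.ppf(1 - alpha)
    y1 = rng.normal(effect, 1.0, (n_sim, n)).mean(axis=1)
    y0 = rng.normal(0.0, 1.0, (n_sim, n)).mean(axis=1)
    z = (y1 - y0) / np.sqrt(2.0 / n)
    return np.mean(z > zcrit)

grid = np.arange(40, 201, 20)
est = np.array([simulated_power(int(n)) for n in grid])

def surrogate(n, a, b):                        # smooth monotone power surrogate
    return norm.cdf(a * np.sqrt(n) - b)

(a_hat, b_hat), _ = curve_fit(surrogate, grid, est, p0=[0.3, 1.0])
fine = np.arange(40, 201)
n_star = int(fine[np.argmax(surrogate(fine, a_hat, b_hat) >= target)])
print("smallest per-arm n with predicted power >= 0.9:", n_star)
```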
115 - Changyu Shen, Xiaochun Li 2019
Phase III randomized clinical trials play a monumentally critical role in the evaluation of new medical products. Because of the intrinsic uncertainty embedded in our ability to assess the efficacy of a medical product, interpretation of trial results relies on statistical principles to control the rate of false positives below a desirable level. The well-established statistical hypothesis testing procedure suffers from two major limitations, namely, the lack of flexibility in the thresholds to claim success and the lack of capability of controlling the total number of false positives that could be yielded by the large volume of trials. We propose two general theoretical frameworks, based on the conventional frequentist paradigm and on Bayesian perspectives, which offer realistic, flexible and effective solutions to these limitations. Our methods are based on the distribution of the effect sizes of the population of trials of interest. The estimation of this distribution is practically feasible as clinicaltrials.gov provides a centralized data repository with unbiased coverage of clinical trials. We provide a detailed development of the two frameworks with numerical results obtained for industry-sponsored Phase III randomized clinical trials.
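The role of the effect-size distribution can be illustrated with a small hedged simulation: assuming a mixture distribution of true standardized effects across a population of two-arm trials, one can estimate, for a given success threshold on the z-statistic, what share of declared successes come from truly null effects. The mixture weights, effect-size distribution, and per-trial information below are illustrative assumptions, not estimates from clinicaltrials.gov or the paper's frameworks.

```python
# Hedged illustration of the effect-size-distribution idea (not the paper's
# frameworks): expected share of false positives among trial "successes"
# at several success thresholds, given an assumed population of effects.
import numpy as np
rng = np.random.default_rng(3)

n_trials, info = 100_000, 4.0                  # sqrt(info) = per-trial precision of z
truly_effective = rng.random(n_trials) < 0.4   # assumed 40% of trials study real effects
true_effect = np.where(truly_effective, rng.gamma(2.0, 0.15, n_trials), 0.0)
z = rng.normal(true_effect * np.sqrt(info), 1.0)

for thresh in (1.96, 2.5, 3.0):
    success = z > thresh
    false_pos_share = np.mean(~truly_effective[success])
    print(f"threshold {thresh}: success rate {success.mean():.3f}, "
          f"share of successes that are false positives {false_pos_share:.3f}")
```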
Response-adaptive randomization (RAR) is part of a wider class of data-dependent sampling algorithms, for which clinical trials are used as a motivating application. In that context, patient allocation to treatments is determined by randomization probabilities that are altered based on the accrued response data in order to achieve experimental goals. RAR has received abundant theoretical attention from the biostatistical literature since the 1930s and has been the subject of numerous debates. In the last decade, it has received renewed consideration from the applied and methodological communities, driven by successful practical examples and its widespread use in machine learning. Papers on the subject can give different views on its usefulness, and reconciling these may be difficult. This work aims to address this gap by providing a unified, broad and up-to-date review of methodological and practical issues to consider when debating the use of RAR in clinical trials.
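One common member of the RAR class, Thompson-sampling-style allocation with Beta posteriors for binary outcomes, is sketched below as a generic illustration of what the review covers; it is not a recommendation from the review, and the response rates, priors, and per-patient update schedule are illustrative assumptions.

```python
# Hedged sketch of response-adaptive randomization via Thompson-style sampling:
# each arm's allocation probability equals its posterior probability of being best.
import numpy as np
rng = np.random.default_rng(11)

true_resp = {"control": 0.30, "treatment": 0.45}   # assumed true response rates
post = {arm: [1.0, 1.0] for arm in true_resp}      # Beta(1, 1) priors per arm

assignments = {arm: 0 for arm in true_resp}
for patient in range(200):
    # Draw one sample from each arm's Beta posterior and assign the patient
    # to the arm with the larger draw.
    draws = {arm: rng.beta(*post[arm]) for arm in post}
    arm = max(draws, key=draws.get)
    response = rng.random() < true_resp[arm]
    post[arm][0] += response                       # successes update alpha
    post[arm][1] += 1 - response                   # failures update beta
    assignments[arm] += 1

print("patients per arm:", assignments)
print("posterior means:", {a: round(p[0] / (p[0] + p[1]), 3) for a, p in post.items()})
```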