
Optimal multiple testing and design in clinical trials

Published by: Ruth Heller
Publication date: 2021
Research field: Mathematical statistics
Paper language: English





A central goal in designing clinical trials is to find the test that maximizes power (or equivalently minimizes required sample size) for finding a true research hypothesis subject to the constraint of type I error. When there is more than one test, such as in clinical trials with multiple endpoints, the issues of optimal design and optimal policies become more complex. In this paper we address the question of how such optimal tests should be defined and how they can be found. We review different notions of power and how they relate to study goals, and also consider the requirements of type I error control and the nature of the policies. This leads us to formulate the optimal policy problem as an explicit optimization problem with objective and constraints which describe its specific desiderata. We describe a complete solution for deriving optimal policies for two hypotheses; the resulting policies have the desired monotonicity properties and are computationally simple. For some of the optimization formulations this yields optimal policies that are identical to existing policies, such as Hommel's procedure or the procedure of Bittman et al. (2009), while for others it yields completely novel and more powerful policies than existing ones. We demonstrate the nature of our novel policies and their improved power extensively in simulation and on the APEX study (Cohen et al., 2016).
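For the two-hypothesis case discussed in the abstract, one of the existing benchmarks, Hommel's procedure, coincides with Hochberg's step-up procedure. A minimal sketch of that benchmark follows; it is not the paper's novel optimal policies, which require solving the stated optimization problem:

```python
def hochberg_two(p1, p2, alpha=0.05):
    """Hochberg step-up procedure for two hypotheses.

    For m = 2 hypotheses Hommel's procedure reduces to Hochberg's:
    reject both if the larger p-value is <= alpha; otherwise reject a
    hypothesis only if its p-value is <= alpha / 2.
    Returns a tuple (reject_H1, reject_H2).
    """
    if max(p1, p2) <= alpha:
        return (True, True)
    if min(p1, p2) <= alpha / 2:
        return (p1 <= alpha / 2, p2 <= alpha / 2)
    return (False, False)
```

The step-up structure is what gives the procedure its monotonicity: lowering either p-value can never turn a rejection into a non-rejection.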


Read also

287 - Li Yang, Wei Ma, Yichen Qin 2020
Concerns have been expressed over the validity of statistical inference under covariate-adaptive randomization despite the extensive use in clinical trials. In the literature, the inferential properties under covariate-adaptive randomization have been mainly studied for continuous responses; in particular, it is well known that the usual two sample t-test for treatment effect is typically conservative, in the sense that the actual test size is smaller than the nominal level. This phenomenon of invalid tests has also been found for generalized linear models without adjusting for the covariates and is sometimes more worrisome due to inflated Type I error. The purpose of this study is to examine the unadjusted test for treatment effect under generalized linear models and covariate-adaptive randomization. For a large class of covariate-adaptive randomization methods, we obtain the asymptotic distribution of the test statistic under the null hypothesis and derive the conditions under which the test is conservative, valid, or anti-conservative. Several commonly used generalized linear models, such as logistic regression and Poisson regression, are discussed in detail. An adjustment method is also proposed to achieve a valid size based on the asymptotic results. Numerical studies confirm the theoretical findings and demonstrate the effectiveness of the proposed adjustment method.
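The conservativeness of the unadjusted t-test described above can be checked directly by Monte Carlo. The sketch below uses a hypothetical data-generating model (one binary covariate, permuted blocks of two within each stratum, standard normal noise, no treatment effect), not any setting from the paper:

```python
import numpy as np
from scipy import stats

def empirical_size(n=200, reps=2000, alpha=0.05, seed=1):
    """Monte Carlo estimate of the actual size of the unadjusted
    two-sample t-test under stratified permuted-block randomization
    (block size 2 within each covariate stratum), with no treatment
    effect.  Illustrative sketch; the model is a placeholder."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        z = rng.integers(0, 2, n)            # binary covariate = stratum
        t = np.empty(n, dtype=int)
        for s in (0, 1):                     # permuted blocks of 2 per stratum
            idx = np.flatnonzero(z == s)
            for j in range(0, len(idx) - 1, 2):
                a, b = rng.permutation([0, 1])
                t[idx[j]], t[idx[j + 1]] = a, b
            if len(idx) % 2:                 # leftover patient: fair coin
                t[idx[-1]] = rng.integers(0, 2)
        # covariate affects the outcome, treatment does not (null is true)
        y = 1.0 * z + rng.normal(size=n)
        _, p = stats.ttest_ind(y[t == 1], y[t == 0])
        rejections += p < alpha
    return rejections / reps
```

Because the randomization balances the covariate across arms within strata, the t-test's variance estimate is too large and the estimated size comes out well below the nominal 0.05, matching the conservativeness the abstract describes.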
336 - James E. Barrett 2015
We propose a novel adaptive design for clinical trials with time-to-event outcomes and covariates (which may consist of or include biomarkers). Our method is based on the expected entropy of the posterior distribution of a proportional hazards model. The expected entropy is evaluated as a function of a patient's covariates, and the information gained due to a patient is defined as the decrease in the corresponding entropy. Candidate patients are only recruited onto the trial if they are likely to provide sufficient information. Patients with covariates that are deemed uninformative are filtered out. A special case is where all patients are recruited, and we determine the optimal treatment arm allocation. This adaptive design has the advantage of potentially elucidating the relationship between covariates, treatments, and survival probabilities using fewer patients, albeit at the cost of rejecting some candidates. We assess the performance of our adaptive design using data from the German Breast Cancer Study group and numerical simulations of a biomarker validation trial.
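The recruitment rule above, admit a candidate only if the expected drop in posterior entropy is large enough, can be illustrated with a toy discrete parameter grid standing in for the proportional hazards posterior; the grid, the likelihoods, and `min_gain` are all illustrative assumptions, not the paper's model:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def expected_posterior_entropy(prior, likelihoods):
    """Expected entropy of the posterior over a discrete parameter grid
    after observing one patient's outcome.  likelihoods[k][o] is
    P(outcome o | parameter k) and depends on the candidate's
    covariates.  Toy stand-in for the proportional-hazards posterior."""
    n_outcomes = len(likelihoods[0])
    exp_h = 0.0
    for o in range(n_outcomes):
        marg = sum(pk * lk[o] for pk, lk in zip(prior, likelihoods))
        post = [pk * lk[o] / marg for pk, lk in zip(prior, likelihoods)]
        exp_h += marg * entropy(post)
    return exp_h

def recruit(prior, likelihoods, min_gain):
    """Recruit a candidate only if the expected information gain
    (prior entropy minus expected posterior entropy) reaches min_gain."""
    gain = entropy(prior) - expected_posterior_entropy(prior, likelihoods)
    return gain >= min_gain
```

A candidate whose covariates make the outcome equally likely under every parameter value yields zero expected gain and is filtered out, which is the "uninformative covariates" case in the abstract.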
Simulation offers a simple and flexible way to estimate the power of a clinical trial when analytic formulae are not available. The computational burden of using simulation has, however, restricted its application to only the simplest of sample size determination problems, minimising a single parameter (the overall sample size) subject to power being above a target level. We describe a general framework for solving simulation-based sample size determination problems with several design parameters over which to optimise and several conflicting criteria to be minimised. The method is based on an established global optimisation algorithm widely used in the design and analysis of computer experiments, using a non-parametric regression model as an approximation of the true underlying power function. The method is flexible, can be used for almost any problem for which power can be estimated using simulation, and can be implemented using existing statistical software packages. We illustrate its application to three increasingly complicated sample size determination problems involving complex clustering structures, co-primary endpoints, and small sample considerations.
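The basic ingredient, estimating power by simulation, can be sketched as below; the two-sample t-test setting, effect size, and brute-force search are simplifications of the paper's multi-parameter, multi-criteria optimisation:

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_arm, effect=0.5, sd=1.0, alpha=0.05,
                    reps=2000, seed=7):
    """Estimate power of a two-sample t-test by simulation.
    effect, sd and alpha are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, sd, n_per_arm)
        b = rng.normal(effect, sd, n_per_arm)
        _, p = stats.ttest_ind(a, b)
        hits += p < alpha
    return hits / reps

def smallest_n(target=0.8, **kw):
    """Naive search for the smallest per-arm n reaching target power.
    The paper replaces this one-dimensional brute-force search with a
    global optimisation algorithm over several design parameters."""
    n = 2
    while simulated_power(n, **kw) < target:
        n += 1
    return n
```

Each call to `simulated_power` is expensive, which is exactly why the framework in the abstract approximates the power surface with a non-parametric regression model instead of re-simulating at every candidate design.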
147 - Suyu Liu, Ying Yuan 2013
Interval designs are a class of phase I trial designs for which the decision of dose assignment is determined by comparing the observed toxicity rate at the current dose with a prespecified (toxicity tolerance) interval. If the observed toxicity rate is located within the interval, we retain the current dose; if the observed toxicity rate is greater than the upper boundary of the interval, we deescalate the dose; and if the observed toxicity rate is smaller than the lower boundary of the interval, we escalate the dose. The most critical issue for the interval design is choosing an appropriate interval so that the design has good operating characteristics. By casting dose finding as a Bayesian decision-making problem, we propose new flexible methods to select the interval boundaries so as to minimize the probability of inappropriate dose assignment for patients. We show, both theoretically and numerically, that the resulting optimal interval designs not only have desirable finite- and large-sample properties, but also are particularly easy to implement in practice. Compared to existing designs, the proposed (local) optimal design has comparable average performance, but a lower risk of yielding a poorly performing clinical trial.
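The escalate/retain/de-escalate rule is simple to state in code. The boundary values below are illustrative placeholders around a 30% target toxicity rate, not the optimised boundaries derived in the paper:

```python
def interval_decision(n_toxicities, n_patients, lower=0.2, upper=0.35):
    """Dose-assignment rule of an interval design.

    Retain the current dose if the observed toxicity rate falls inside
    the tolerance interval [lower, upper]; escalate below it,
    de-escalate above it.  The boundaries here are hypothetical; the
    paper chooses them by minimizing the probability of inappropriate
    dose assignment.
    """
    rate = n_toxicities / n_patients
    if rate < lower:
        return "escalate"
    if rate > upper:
        return "deescalate"
    return "retain"
```

The entire design question then reduces to picking `lower` and `upper` well, which is what the Bayesian decision-theoretic formulation in the abstract does.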
88 - Ziyu Xu, Aaditya Ramdas 2020
We derive new algorithms for online multiple testing that provably control false discovery exceedance (FDX) while achieving orders of magnitude more power than previous methods. This statistical advance is enabled by the development of new algorithmic ideas: earlier algorithms are more static while our new ones allow for the dynamical adjustment of testing levels based on the amount of wealth the algorithm has accumulated. We demonstrate that our algorithms achieve higher power in a variety of synthetic experiments. We also prove that SupLORD can provide error control for both FDR and FDX, and controls FDR at stopping times. Stopping times are particularly important as they permit the experimenter to end the experiment arbitrarily early while maintaining desired control of the FDR. SupLORD is the first non-trivial algorithm, to our knowledge, that can control FDR at stopping times in the online setting.
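The wealth mechanic referred to above can be illustrated with the older alpha-investing scheme of Foster and Stine (2008). This is a generic sketch with an arbitrary spending rule, not SupLORD, and it carries none of SupLORD's FDX or stopping-time guarantees:

```python
def alpha_investing(pvalues, wealth=0.05, payout=0.05):
    """Generic alpha-investing sketch: each test spends some wealth,
    a rejection earns wealth back, a non-rejection pays a penalty.
    The spending rule (half the current wealth) is illustrative.
    Returns a list of accept/reject decisions, one per p-value."""
    decisions = []
    for p in pvalues:
        level = wealth / 2                    # testing level for this p-value
        if p <= level:
            decisions.append(True)
            wealth += payout                  # rejection replenishes wealth
        else:
            decisions.append(False)
            wealth -= level / (1 - level)     # pay for a failed test
        wealth = max(wealth, 0.0)
        if wealth == 0.0:                     # bankrupt: nothing more can be rejected
            decisions.extend([False] * (len(pvalues) - len(decisions)))
            break
    return decisions
```

The dynamic behaviour is visible in a short run: early rejections raise the wealth, so later tests are run at more generous levels, which is the effect the abstract credits for SupLORD's power gains.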