
Borrowing of information across patient subgroups in a basket trial based on distributional discrepancy

Posted by Haiyan Zheng
Publication date: 2019
Research field: Mathematical Statistics
Paper language: English





Basket trials have emerged as a new class of efficient approaches in oncology to evaluate a new treatment in several patient subgroups simultaneously. In this paper, we extend the key ideas to disease areas outside of oncology, developing a robust Bayesian methodology for randomised, placebo-controlled basket trials with a continuous endpoint that enables borrowing of information across subtrials with similar treatment effects. After adjusting for covariates, information from a complementary subtrial can be represented as a commensurate prior for the parameter that underpins the subtrial under consideration. We propose using distributional discrepancy to characterise the commensurability between subtrials for appropriate borrowing of information through a spike-and-slab prior, which is placed on the prior precision factor. When the basket trial has at least three subtrials, the commensurate priors for point-to-point borrowing are combined into a marginal predictive prior, according to weights transformed from the pairwise discrepancy measures. In this way, only information from the subtrial(s) with the most commensurate treatment effect is leveraged. The marginal predictive prior is updated by the contemporary subtrial data to a robust posterior that informs decision making. Operating characteristics of the proposed methodology are evaluated through simulations motivated by a real basket trial in chronic diseases. Compared with other Bayesian analysis models considered, the proposed methodology has advantages in (i) identifying the most commensurate source of information and (ii) gauging the degree of borrowing from specific subtrials. Numerical results also suggest that our methodology can improve the precision of estimates and, potentially, the statistical power for hypothesis testing.
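
The discrepancy-to-weight step described above can be illustrated numerically. Below is a minimal sketch in Python, assuming normal approximations to the covariate-adjusted effect estimates from each subtrial and using the squared Hellinger distance as the discrepancy measure; the estimates, the particular discrepancy-to-weight transform, and the twofold variance inflation of each borrowed component are illustrative assumptions, not the paper's spike-and-slab commensurate prior.

import numpy as np
from scipy.stats import norm

def hellinger2(m1, s1, m2, s2):
    """Squared Hellinger distance between two normal densities."""
    return 1.0 - np.sqrt(2.0 * s1 * s2 / (s1**2 + s2**2)) * \
        np.exp(-(m1 - m2)**2 / (4.0 * (s1**2 + s2**2)))

# Covariate-adjusted treatment-effect estimates (mean, standard error) per subtrial;
# subtrial 0 is the one under analysis, 1 and 2 are complementary (values assumed).
est = {0: (0.40, 0.22), 1: (0.35, 0.20), 2: (-0.10, 0.18)}
y, se = est[0]
sources = {k: v for k, v in est.items() if k != 0}

# Pairwise discrepancy -> borrowing weight (this particular transform is an assumption).
w = np.array([1.0 - hellinger2(y, se, m, s) for m, s in sources.values()])
w /= w.sum()

# Mixture ("marginal predictive") prior: each component is a complementary estimate
# with its variance inflated (here by a factor of 2) to reflect limited commensurability.
components = [(m, np.sqrt(2.0) * s) for m, s in sources.values()]

# Conjugate update of each normal component by the current subtrial estimate;
# component weights are re-weighted by the marginal likelihood of that estimate.
post, logw = [], []
for (m0, s0), w0 in zip(components, w):
    v = 1.0 / (1.0 / s0**2 + 1.0 / se**2)
    post.append((v * (m0 / s0**2 + y / se**2), np.sqrt(v)))
    logw.append(np.log(w0) + norm.logpdf(y, m0, np.sqrt(s0**2 + se**2)))
pw = np.exp(np.array(logw) - max(logw))
pw /= pw.sum()
print("prior weights:", w.round(3), "posterior weights:", pw.round(3))
print("posterior components (mean, sd):", [(round(m, 3), round(s, 3)) for m, s in post])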




Read also

Incorporating preclinical animal data, which can be regarded as a special kind of historical data, into phase I clinical trials can improve decision making when very little about human toxicity is known. In this paper, we develop a robust hierarchical modelling approach to leverage animal data into new phase I clinical trials, where we bridge across non-overlapping, potentially heterogeneous patient subgroups. Translation parameters are used to bring both historical and contemporary data onto a common dosing scale. This leads to feasible exchangeability assumptions: the parameter vectors that underpin the dose-toxicity relationship in each study are assumed to be drawn from a common distribution. Moreover, human dose-toxicity parameter vectors are assumed to be exchangeable either with the standardised, animal study-specific parameter vectors, or between themselves. The possibility of non-exchangeability for each parameter vector is considered to avoid inferences for extreme subgroups being overly influenced by the others. We illustrate the proposed approach with several trial data examples, and evaluate the operating characteristics of our model compared with several alternatives in a simulation study. Numerical results show that our approach yields robust inferences in circumstances where data from multiple sources are inconsistent and/or the bridging assumptions are incorrect.
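
As a rough illustration of the translation step, the sketch below rescales animal doses onto a human-equivalent scale before they enter an assumed logistic dose-toxicity model; the Km conversion factors, study doses, and model parameters are illustrative assumptions rather than values from the paper.

import numpy as np

# Body-surface-area conversion factors (Km) commonly cited for dose translation;
# treated here as illustrative constants, not data from the paper.
KM = {"mouse": 3.0, "rat": 6.0, "human": 37.0}

def human_equivalent_dose(dose_mg_per_kg, species):
    """Translate an animal dose (mg/kg) onto a human-equivalent dosing scale."""
    return dose_mg_per_kg * KM[species] / KM["human"]

def tox_prob(dose, intercept, slope, ref_dose):
    """Assumed two-parameter logistic dose-toxicity model on the log-dose scale."""
    return 1.0 / (1.0 + np.exp(-(intercept + slope * np.log(dose / ref_dose))))

# Example: a rat study run at 10, 20 and 40 mg/kg, translated before being used
# as (possibly down-weighted) prior information about the human dose-toxicity curve.
rat_doses = np.array([10.0, 20.0, 40.0])
hed = human_equivalent_dose(rat_doses, "rat")
print("human-equivalent doses (mg/kg):", hed.round(2))
print("prior toxicity estimates:", tox_prob(hed, intercept=-1.0, slope=1.2, ref_dose=5.0).round(3))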
We propose an information borrowing strategy for the design and monitoring of phase II basket trials based on the local multisource exchangeability assumption between baskets (disease types), and construct a flexible statistical design using this strategy. Our approach partitions potentially heterogeneous baskets into non-exchangeable blocks. Information borrowing is only allowed to occur locally, i.e., among similar baskets within the same block, and the amount of borrowing is determined by between-basket similarities. The number of blocks and the block memberships are inferred from data based on the posterior probability of each partition. The proposed method is compared to the multisource exchangeability model and Simon's two-stage design, respectively. In a variety of simulation scenarios, we demonstrate that the proposed method maintains the type I error rate and achieves desirable basket-wise power. In addition, our method is computationally efficient compared to existing Bayesian methods in that the posterior profiles of interest can be derived explicitly without the need for sampling algorithms.
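
The partition-posterior calculation that this kind of local borrowing relies on can be sketched with conjugate beta-binomial pooling within blocks and a uniform prior over partitions; the basket counts below are made up, and the sketch is not the authors' exact model.

import numpy as np
from scipy.special import betaln

def set_partitions(items):
    """Enumerate all partitions of the basket labels into non-empty blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def log_marginal(block, x, n, a=0.5, b=0.5):
    """Beta-binomial marginal likelihood of the data pooled within one block
    (binomial coefficients are identical across partitions and omitted)."""
    s = sum(x[k] for k in block)
    f = sum(n[k] - x[k] for k in block)
    return betaln(a + s, b + f) - betaln(a, b)

# Illustrative data: responders x out of n patients in four baskets.
x = {0: 6, 1: 7, 2: 1, 3: 2}
n = {0: 20, 1: 20, 2: 20, 3: 20}

partitions = list(set_partitions(list(x)))
logw = np.array([sum(log_marginal(blk, x, n) for blk in p) for p in partitions])
post = np.exp(logw - logw.max())
post /= post.sum()                      # uniform prior over the 15 partitions
for p, pr in sorted(zip(partitions, post), key=lambda t: -t[1])[:3]:
    print(f"posterior probability {pr:.3f} for blocks {p}")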
Tissue-agnostic trials enroll patients based on their genetic biomarkers, not tumor type, in an attempt to determine whether a new drug can successfully treat disease conditions based on biomarkers. The Bayesian hierarchical model (BHM) provides an attractive approach to designing phase II tissue-agnostic trials by allowing information borrowing across multiple disease types. In this article, we elucidate two intrinsic and inevitable issues that may limit the use of the BHM in tissue-agnostic trials: sensitivity to the prior specification of the shrinkage parameter, and the competing interest among disease types in increasing power and controlling type I error. To address these issues, we propose the optimal BHM (OBHM) approach. With OBHM, we first specify a flexible utility function to quantify the tradeoff between type I error and power across disease types based on the study objectives, and then we select the prior of the shrinkage parameter to optimize the utility function of clinical and regulatory interest. OBHM effectively balances type I and II errors, addresses the sensitivity of the prior selection, and reduces unwarranted subjectivity in the prior selection. A simulation study shows that the resulting OBHM and its extensions, clustered OBHM (COBHM) and adaptive OBHM (AOBHM), have desirable operating characteristics, outperforming some existing methods with better-balanced power and type I error control. Our method provides a systematic, rigorous way to apply the BHM and solve the common problem of blindly using a non-informative inverse-gamma prior (with a large variance), or arbitrarily chosen priors, which may lead to pathological statistical properties.
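
A much-simplified version of the calibration idea is sketched below: instead of placing a prior on the shrinkage parameter, the sketch grid-searches a fixed between-basket variance in a normal-approximation shrinkage analysis and scores each value with an assumed utility of the form power minus a penalty times type I error; the scenario, cutoff, and utility weight are illustrative assumptions, not the OBHM itself.

import numpy as np

rng = np.random.default_rng(1)
K, n = 4, 25                      # number of baskets and patients per basket
p0, p1 = 0.2, 0.4                 # null and target response rates
lam = 2.0                         # utility weight on the type I error rate
logit = lambda p: np.log(p / (1 - p))

def rejections(x, tau2, cutoff=1.645):
    """One simulated trial analysed with normal-approximation shrinkage."""
    phat = np.clip(x / n, 0.02, 0.98)
    y, s2 = logit(phat), 1.0 / (n * phat * (1 - phat))
    w = tau2 / (tau2 + s2)                          # per-basket borrowing weight
    mu = np.average(y, weights=1.0 / (s2 + tau2))   # overall mean on the logit scale
    post_mean, post_sd = w * y + (1 - w) * mu, np.sqrt(w * s2)
    return (post_mean - logit(p0)) / post_sd > cutoff

def rejection_rate(tau2, p_true, nsim=2000):
    return np.mean([rejections(rng.binomial(n, p_true, size=K), tau2) for _ in range(nsim)])

for tau2 in (0.01, 0.05, 0.1, 0.25, 0.5, 1.0, 2.0):
    t1 = rejection_rate(tau2, p0)       # basket-wise type I error under the global null
    pw = rejection_rate(tau2, p1)       # basket-wise power when every basket responds
    print(f"tau2={tau2:5.2f}  typeI={t1:.3f}  power={pw:.3f}  utility={pw - lam * t1:.3f}")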
Yan-Cheng Chao, 2020
A small n, sequential, multiple assignment, randomized trial (snSMART) is a small-sample, two-stage design in which participants receive up to two treatments sequentially, with the second treatment depending on the response to the first. The treatment effect of interest in an snSMART is the first-stage response rate, but outcomes from both stages can be used to obtain more information from a small sample. A novel way to incorporate the outcomes from both stages applies power prior models, in which first-stage outcomes from an snSMART are regarded as the primary data and second-stage outcomes are regarded as supplemental. We apply existing power prior models to snSMART data, and we also develop new extensions of power prior models. All methods are compared to each other and to the Bayesian joint stage model (BJSM) via simulation studies. By comparing the biases and the efficiency of the response rate estimates among all proposed power prior methods, we suggest applying Fisher's exact test or Bhattacharyya's overlap measure to estimate the treatment effect in an snSMART; both perform mostly as well as or better than the BJSM. We describe the situations in which each of these suggested approaches is preferred.
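
As a sketch of the power-prior idea with an overlap-based discount, the code below treats stage-1 counts as primary and stage-2 counts as supplemental, sets the power parameter to the Bhattacharyya coefficient between the two stage-wise Beta posteriors, and reports the resulting conjugate posterior; the counts and this particular choice of discount are assumptions, not results from the paper.

import numpy as np
from scipy.stats import beta

# Illustrative snSMART-style counts (assumed): stage-1 responders/treated on the
# target treatment, and stage-2 counts used as supplemental data.
x1, n1 = 7, 20
x2, n2 = 9, 22
a, b = 0.5, 0.5                    # Beta prior for the first-stage response rate

# Power-prior discount a0 from the Bhattacharyya coefficient (overlap) between
# the stage-1-only and stage-2-only posteriors, computed on a fine grid.
grid = np.linspace(1e-6, 1 - 1e-6, 4001)
f1 = beta.pdf(grid, a + x1, b + n1 - x1)
f2 = beta.pdf(grid, a + x2, b + n2 - x2)
a0 = np.sum(np.sqrt(f1 * f2)) * (grid[1] - grid[0])   # lies in [0, 1]

# Conditional power prior: the stage-2 likelihood enters raised to the power a0.
post_a = a + x1 + a0 * x2
post_b = b + (n1 - x1) + a0 * (n2 - x2)
print(f"a0 = {a0:.3f}")
print(f"posterior mean response rate = {post_a / (post_a + post_b):.3f}")
print(f"95% credible interval = {beta.ppf([0.025, 0.975], post_a, post_b).round(3)}")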
Knockoffs provide a general framework for controlling the false discovery rate when performing variable selection. Much of the knockoffs literature focuses on theoretical challenges, and we recognize a need to bring some of the current ideas into practice. In this paper we propose a sequential algorithm for generating knockoffs when the underlying data consist of both continuous and categorical (factor) variables. Further, we present a heuristic multiple-knockoffs approach that offers a practical assessment of how robust the knockoff selection process is for a given data set. We conduct extensive simulations to validate the performance of the proposed methodology. Finally, we demonstrate the utility of the methods on a large clinical data pool of more than 2,000 patients with psoriatic arthritis evaluated in 4 clinical trials with an IL-17A inhibitor, secukinumab (Cosentyx), where we determine prognostic factors of a well-established clinical outcome. The analyses presented in this paper could provide a wide range of applications to commonly encountered data sets in medical practice and other fields where variable selection is of particular interest.
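
For readers who want a concrete starting point, the sketch below implements the standard second-order Gaussian model-X knockoff construction with an equicorrelated diagonal, a lasso coefficient-difference statistic, and the knockoff+ threshold; it is deliberately simpler than the sequential mixed-type generation and multiple-knockoffs heuristic described in the abstract, and the toy data are synthetic.

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

def gaussian_knockoffs(X, shrink=0.99):
    """Equicorrelated second-order Gaussian knockoffs for standardised X."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    n, p = X.shape
    Sigma = np.corrcoef(X, rowvar=False)
    s = shrink * min(1.0, 2.0 * np.linalg.eigvalsh(Sigma).min()) * np.ones(p)
    Sinv_D = np.linalg.solve(Sigma, np.diag(s))
    mu = X - X @ Sinv_D                            # conditional mean of the knockoff copy
    V = 2.0 * np.diag(s) - np.diag(s) @ Sinv_D     # conditional covariance
    L = np.linalg.cholesky(V + 1e-10 * np.eye(p))
    return X, mu + rng.standard_normal((n, p)) @ L.T

def knockoff_select(X, y, q=0.2):
    """Lasso coefficient-difference statistics with the knockoff+ threshold."""
    X, Xk = gaussian_knockoffs(X)
    coef = LassoCV(cv=5, random_state=0).fit(np.hstack([X, Xk]), y).coef_
    p = X.shape[1]
    W = np.abs(coef[:p]) - np.abs(coef[p:])
    for t in np.sort(np.abs(W[W != 0])):
        if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
            return np.where(W >= t)[0]
    return np.array([], dtype=int)

# Toy example: 3 truly active variables out of 30.
n, p = 400, 30
X = rng.standard_normal((n, p))
y = X[:, :3] @ np.array([2.0, -1.5, 1.5]) + rng.standard_normal(n)
print("selected variables:", knockoff_select(X, y))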