
A Semi-parametric Bayesian Approach to Population Finding with Time-to-Event and Toxicity Data in a Randomized Clinical Trial

Submitted by Satoshi Morita
Publication date: 2019
Research field: Mathematical Statistics
Paper language: English

A utility-based Bayesian population finding (BaPoFi) method was proposed by Morita and Muller (2017, Biometrics, 1355-1365) to analyze data from a randomized clinical trial with the aim of identifying good predictive baseline covariates for optimizing the target population for a future study. The approach casts the population finding process as a formal decision problem, together with a flexible probability model that uses a random forest to define the regression mean function. BaPoFi is constructed to handle a single continuous or binary outcome variable. In this paper, we develop BaPoFi-TTE as an extension of the earlier approach to the clinically important case of time-to-event (TTE) data with censoring, also accounting for a toxicity outcome. We model the association of the TTE data with baseline covariates using a semi-parametric failure time model, with a Polya tree prior for the unknown error distribution and a random forest for a flexible regression mean function. We define a utility function that addresses the trade-off between efficacy and toxicity, one of the important clinical considerations in population finding. We examine the operating characteristics of the proposed method in extensive simulation studies. For illustration, we apply the proposed method to data from a randomized oncology clinical trial. Concerns arising in a preliminary analysis of the same data under a parametric model motivated the proposed, more general approach.
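To fix ideas, the semi-parametric failure time model the abstract describes can be sketched as follows (a minimal sketch with notation assumed here, not taken from the paper):

\[
\log T_i = f(\mathbf{x}_i, z_i) + \epsilon_i, \qquad \epsilon_i \mid G \overset{iid}{\sim} G, \qquad G \sim \mathrm{PolyaTree}(\Pi, \mathcal{A}),
\]

where \(T_i\) is the event time for patient \(i\), \(z_i\) the treatment arm, \(\mathbf{x}_i\) the baseline covariates, \(f\) the regression mean function with a random-forest prior, and the Polya tree prior on \(G\) frees the error distribution from a parametric (e.g., log-normal or Weibull) assumption. Population finding then reports the covariate-defined subpopulation \(A\) that maximizes a posterior expected utility of the assumed form \(u(A) = (\text{expected treatment benefit in } A) - \lambda \cdot (\text{expected toxicity in } A)\), with \(\lambda\) a clinician-chosen trade-off weight.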


Read also

A major practical impediment when implementing adaptive dose-finding designs is that the toxicity outcome used by the decision rules may not be observed shortly after the initiation of the treatment. To address this issue, we propose the data augmentation continual reassessment method (DA-CRM) for dose finding. By naturally treating the unobserved toxicities as missing data, we show that such missing data are nonignorable in the sense that the missingness depends on the unobserved outcomes. The Bayesian data augmentation approach is used to sample both the missing data and the model parameters from their posterior full conditional distributions. We evaluate the performance of the DA-CRM through extensive simulation studies and also compare it with other existing methods. The results show that the proposed design satisfactorily resolves the issues related to late-onset toxicities and possesses desirable operating characteristics: treating patients more safely and also selecting the maximum tolerated dose with a higher probability. The new DA-CRM is illustrated with two phase I cancer clinical trials.
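The abstract's core mechanism, treating not-yet-observed toxicities as missing data and alternating imputation with posterior updating, can be sketched as below. This is a minimal illustration assuming a standard one-parameter power-model CRM and a TITE-style uniform-onset imputation rule; the skeleton, prior, and all variable names are assumptions, not the authors' exact design.

```python
# Minimal data-augmentation sketch for late-onset toxicity in a CRM.
# All modeling choices here are illustrative assumptions, not DA-CRM itself.
import numpy as np

rng = np.random.default_rng(0)
skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.50])  # prior toxicity guesses per dose

def tox_prob(theta, dose):
    # One-parameter power model commonly used in the CRM.
    return skeleton[dose] ** np.exp(theta)

# Toy trial state: dose given, toxicity seen so far, fraction of the
# assessment window completed (1.0 = fully followed up).
dose = np.array([0, 0, 1, 1, 2, 2])
tox  = np.array([0, 0, 0, 1, 0, 0])              # observed toxicities
frac = np.array([1.0, 1.0, 1.0, 1.0, 0.4, 0.2])  # follow-up fraction
pending = (tox == 0) & (frac < 1.0)              # outcome still missing

def log_post(theta, y):
    p = tox_prob(theta, dose)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loglik - 0.5 * theta**2 / 1.34        # N(0, 1.34) prior on theta

theta, draws = 0.0, []
for it in range(2000):
    # Imputation step: assuming toxicity onset times are uniform over the
    # window, a patient event-free through fraction w of follow-up has
    # conditional toxicity probability p*(1-w) / (1 - p*w) (TITE-style).
    p = tox_prob(theta, dose)
    cond = p * (1 - frac) / (1 - p * frac)
    y = tox.copy().astype(float)
    y[pending] = rng.random(pending.sum()) < cond[pending]
    # Posterior step: random-walk Metropolis update given completed data.
    prop = theta + 0.5 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop, y) - log_post(theta, y):
        theta = prop
    draws.append(theta)

post_p = np.mean([tox_prob(t, np.arange(5)) for t in draws[500:]], axis=0)
print("posterior mean toxicity by dose:", post_p.round(3))
```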
Some years ago, Snapinn and Jiang [1] considered the interpretation and pitfalls of absolute versus relative treatment effect measures in analyses of time-to-event outcomes. Through specific examples and analytical considerations based solely on the exponential and the Weibull distributions, they reach two conclusions: 1) that the commonly used criteria for clinical effectiveness, the absolute risk reduction (ARR) and the median survival time difference (MD), directly contradict each other, and 2) that cost-effectiveness depends only on the hazard ratio (HR) and the shape parameter (in the Weibull case), but not on the overall baseline risk of the population. Though provocative, the first conclusion does not apply to either of the two special cases considered, or even more generally, while the second conclusion is strictly correct only for the exponential case. Therefore, the implication inferred by the authors, i.e., that all measures of absolute treatment effect are of little value compared with the relative measure of the hazard ratio, is not of general validity, and hence both absolute and relative measures should continue to be used when appraising clinical evidence.
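To see concretely why absolute and relative measures behave differently, take the exponential case with control hazard \(\lambda\) and treatment hazard \(\mathrm{HR}\cdot\lambda\) (a worked example added here for illustration, not taken from [1]):

\[
S_c(t) = e^{-\lambda t}, \qquad S_t(t) = e^{-\mathrm{HR}\,\lambda t}.
\]

The median difference is \(\mathrm{MD} = \frac{\ln 2}{\lambda}\left(\frac{1}{\mathrm{HR}} - 1\right)\), and the absolute risk reduction at horizon \(t^*\) is \(\mathrm{ARR}(t^*) = e^{-\mathrm{HR}\,\lambda t^*} - e^{-\lambda t^*}\). Both absolute measures scale with the baseline hazard \(\lambda\) (doubling \(\lambda\) halves the MD), while the HR does not, which is exactly why relative and absolute measures answer different questions and should be reported together.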
Early detection of changes in the frequency of events is an important task in, for example, disease surveillance, monitoring of high-quality processes, reliability monitoring, and public health. In this article, we focus on detecting changes in multivariate event data by monitoring the time-between-events (TBE). Existing multivariate TBE charts are limited in the sense that they only signal after an event has occurred in each of the individual processes. This results in delays (i.e., a long time to signal), especially if it is of interest to detect a change in only one or a few of the processes. We propose a bivariate TBE (BTBE) chart which is able to signal in real time. We derive analytical expressions for the control limits and the average time-to-signal performance, conduct a performance evaluation, and compare our chart to an existing method. The findings show that our method is a realistic approach to monitoring bivariate time-between-event data and has better detection ability than existing methods. A large benefit of our method is that it signals in real time and that, owing to the analytical expressions, no simulation is needed. The proposed method is illustrated on a real-life dataset related to AIDS.
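To illustrate the real-time signaling idea in the simplest setting, the sketch below monitors a single exponential TBE stream: a completed gap shorter than the lower limit signals a rate increase, while an event-free stretch longer than the upper limit signals a rate decrease before the next event even arrives. The bivariate chart and its analytical limits from the article are not reproduced; the rate, alpha, and limits here are assumed for illustration.

```python
# Single-stream TBE monitoring sketch, assuming exponential in-control
# gaps with rate lam0; limits are simple exponential quantiles.
import math

lam0 = 1.0     # in-control event rate (events per time unit, assumed)
alpha = 0.005  # per-event false-alarm probability (assumed)
lcl = -math.log(1 - alpha / 2) / lam0  # very short TBE -> rate increase
ucl = -math.log(alpha / 2) / lam0      # very long gap  -> rate decrease

def check(time_value, event_observed):
    """Real-time rule: a completed TBE below lcl signals a rate increase;
    elapsed time since the last event exceeding ucl signals a rate
    decrease without waiting for the next event."""
    if event_observed and time_value < lcl:
        return "signal: rate increase"
    if time_value > ucl:
        return "signal: rate decrease"
    return "in control"

print(check(0.001, True))  # unusually short gap between events
print(check(7.0, False))   # long event-free stretch, no event yet
```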
The detection and analysis of events within massive collections of time-series has become an extremely important task for time-domain astronomy. In particular, many scientific investigations (e.g. the analysis of microlensing and other transients) begin with the detection of isolated events in irregularly-sampled series with both non-linear trends and non-Gaussian noise. We outline a semi-parametric, robust, parallel method for identifying variability and isolated events at multiple scales in the presence of the above complications. This approach harnesses the power of Bayesian modeling while maintaining much of the speed and scalability of more ad-hoc machine learning approaches. We also contrast this work with event detection methods from other fields, highlighting the unique challenges posed by astronomical surveys. Finally, we present results from the application of this method to 87.2 million EROS-2 sources, where we have obtained a greater than 100-fold reduction in candidates for certain types of phenomena while creating high-quality features for subsequent analyses.
Observational studies are valuable for estimating the effects of various medical interventions, but are notoriously difficult to evaluate because the methods used in observational studies require many untestable assumptions. This lack of verifiability makes it difficult both to compare different observational study methods and to trust the results of any particular observational study. In this work, we propose TrialVerify, a new approach for evaluating observational study methods based on ground truth sourced from clinical trial reports. We process trial reports into a denoised collection of known causal relationships that can then be used to estimate the precision and recall of various observational study methods. We then use TrialVerify to evaluate multiple observational study methods in terms of their ability to identify the known causal relationships from a large national insurance claims dataset. We found that inverse propensity score weighting is an effective approach for accurately reproducing known causal relationships and outperforms other observational study methods. TrialVerify is made freely available for others to evaluate observational study methods.
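Since the abstract singles out inverse propensity score weighting (IPW) as the best-performing method, here is a minimal textbook IPW sketch on synthetic data, showing how weighting by the propensity score removes confounding that biases the naive comparison. This is a generic illustration; TrialVerify's actual pipeline, data, and implementation are not reproduced, and all names here are made up.

```python
# Generic IPW estimate of an average treatment effect on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                      # a confounder
t = rng.random(n) < 1 / (1 + np.exp(-x))    # treatment assignment depends on x
y = 2.0 * t + 1.5 * x + rng.normal(size=n)  # true treatment effect = 2.0

# Step 1: propensity scores e(x) = P(T=1 | x). The true logistic model is
# known here, so we use it directly; in practice one would fit, e.g., a
# logistic regression to estimate it.
e = 1 / (1 + np.exp(-x))

# Step 2: Horvitz-Thompson style weighted means for the two arms.
ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
naive = y[t].mean() - y[~t].mean()  # confounded, biased comparison
print(f"naive difference: {naive:.2f}, IPW estimate: {ate:.2f} (truth 2.0)")
```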