
A Simulation Study Evaluating Phase I Clinical Trial Designs for Combinational Agents

Posted by Shu Wang
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





Nowadays, more and more clinical trials choose combinational agents as the intervention to achieve better therapeutic responses. However, dose-finding for combinational agents is much more complicated than for a single agent, because the full ordering of the combination doses by toxicity is unknown. Regular phase I designs are therefore unable to identify the maximum tolerated dose (MTD) of combinational agents. Motivated by this need, many novel phase I clinical trial designs for combinational agents have been proposed. With so many available designs, research that compares their performance, explores the impact of design parameters, and provides recommendations is very limited. We therefore conducted a simulation study to evaluate multiple phase I designs proposed to identify a single MTD for combinational agents under various scenarios. We also explored the influence of different design parameters. Finally, we summarized the pros and cons of each design and provided a general guideline for design selection.
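As a rough illustration of why the unknown toxicity ordering is the sticking point, the following minimal Python sketch (with hypothetical toxicity probabilities, not taken from the paper) builds a two-agent dose grid in which toxicity is monotone along each agent's own axis, yet combinations on different diagonals cannot be ranked in advance:

```python
# Minimal sketch (hypothetical numbers, not from the paper): a 3x3 grid of
# true toxicity probabilities for a two-agent combination. Rows index agent B
# dose levels, columns index agent A dose levels. Toxicity is monotone along
# each row and each column, but combinations on different diagonals cannot be
# ordered a priori, which is why single-agent escalation logic does not carry
# over directly.
import numpy as np

true_tox = np.array([
    [0.05, 0.10, 0.20],   # agent B level 1
    [0.10, 0.25, 0.40],   # agent B level 2
    [0.20, 0.40, 0.60],   # agent B level 3
])
target = 0.30  # target toxicity probability for the MTD

# The known part of the ordering: increasing either agent never lowers toxicity.
assert (np.diff(true_tox, axis=0) >= 0).all()
assert (np.diff(true_tox, axis=1) >= 0).all()

# Combinations such as (row 0, col 2) and (row 2, col 0) are not comparable
# under this partial ordering, even though here they happen to be equally toxic.
dist = np.abs(true_tox - target)
candidates = np.argwhere(dist == dist.min())
print("combination(s) closest to the target toxicity:", candidates.tolist())
```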




Read also

Thomas Burnett 2020
Adaptive designs for clinical trials permit alterations to a study in response to accumulating data in order to make trials more flexible, ethical and efficient. These benefits are achieved while preserving the integrity and validity of the trial, through the pre-specification and proper adjustment for the possible alterations during the course of the trial. Despite much research in the statistical literature highlighting the potential advantages of adaptive designs over traditional fixed designs, the uptake of such methods in clinical research has been slow. One major reason for this is that different adaptations to trial designs, as well as their advantages and limitations, remain unfamiliar to large parts of the clinical community. The aim of this paper is to clarify where adaptive designs can be used to address specific questions of scientific interest; we introduce the main features of adaptive designs and commonly used terminology, highlighting their utility and pitfalls, and illustrate their use through case studies of adaptive trials ranging from early-phase dose escalation to confirmatory Phase III studies.
Observational studies are valuable for estimating the effects of various medical interventions, but are notoriously difficult to evaluate because the methods used in observational studies require many untestable assumptions. This lack of verifiability makes it difficult both to compare different observational study methods and to trust the results of any particular observational study. In this work, we propose TrialVerify, a new approach for evaluating observational study methods based on ground truth sourced from clinical trial reports. We process trial reports into a denoised collection of known causal relationships that can then be used to estimate the precision and recall of various observational study methods. We then use TrialVerify to evaluate multiple observational study methods in terms of their ability to identify the known causal relationships from a large national insurance claims dataset. We found that inverse propensity score weighting is an effective approach for accurately reproducing known causal relationships and outperforms other observational study methods. TrialVerify is made freely available for others to evaluate observational study methods.
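The evaluation idea can be pictured with a minimal sketch, assuming a hypothetical ground-truth set of (drug, outcome) pairs and a hypothetical set of relationships flagged by an observational method; none of the names or the metric code below come from TrialVerify itself:

```python
# Hedged sketch of the evaluation scheme described above: compare the
# (drug, outcome) relationships flagged by an observational method against a
# ground-truth set distilled from trial reports, and report precision/recall.
# All pairs below are illustrative placeholders.
known_causal = {("drugA", "nausea"), ("drugB", "rash"), ("drugC", "fatigue")}
method_flags = {("drugA", "nausea"), ("drugB", "rash"), ("drugD", "headache")}

tp = len(method_flags & known_causal)                      # true positives
precision = tp / len(method_flags) if method_flags else 0.0
recall = tp / len(known_causal) if known_causal else 0.0
print(f"precision={precision:.2f}, recall={recall:.2f}")   # 0.67, 0.67
```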
Some years ago, Snapinn and Jiang [1] considered the interpretation and pitfalls of absolute versus relative treatment effect measures in analyses of time-to-event outcomes. Through specific examples and analytical considerations based solely on the exponential and the Weibull distributions, they reach two conclusions: 1) that the commonly used criteria for clinical effectiveness, the absolute risk reduction (ARR) and the median survival time difference (MD), directly contradict each other, and 2) that cost-effectiveness depends only on the hazard ratio (HR) and the shape parameter (in the Weibull case) but not on the overall baseline risk of the population. Though provocative, the first conclusion does not apply to either of the two special cases considered, or even more generally, while the second conclusion is strictly correct only for the exponential case. Therefore, the implication inferred by the authors, i.e. that all measures of absolute treatment effect are of little value compared with the relative measure of the hazard ratio, is not of general validity, and hence both absolute and relative measures should continue to be used when appraising clinical evidence.
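For concreteness, the exponential special case referred to above can be written out with standard survival formulas; this is a sketch of the textbook case, not a reproduction of Snapinn and Jiang's derivation:

```latex
% Exponential survival with baseline hazard \lambda_0 and treatment hazard
% \lambda_1 = \mathrm{HR}\,\lambda_0 (standard formulas, stated as a sketch).
\[
  \mathrm{ARR}(t) = S_1(t) - S_0(t)
                  = e^{-\mathrm{HR}\,\lambda_0 t} - e^{-\lambda_0 t},
  \qquad
  \mathrm{MD} = \frac{\ln 2}{\lambda_1} - \frac{\ln 2}{\lambda_0}
              = \frac{\ln 2}{\lambda_0}\left(\frac{1}{\mathrm{HR}} - 1\right).
\]
```

Both ARR(t) and MD depend on the baseline hazard as well as on the hazard ratio, which is one way of seeing why absolute and relative measures are not interchangeable.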
In early clinical test evaluations the potential benefits of the introduction of a new technology into the healthcare system are assessed in the challenging situation of limited available empirical data. The aim of these evaluations is to provide additional evidence for the decision maker, who is typically a funder or the company developing the test, to evaluate which technologies should progress to the next stage of evaluation. In this paper we consider the evaluation of a diagnostic test for patients suffering from Chronic Obstructive Pulmonary Disease (COPD). We describe the use of graphical models, prior elicitation and uncertainty analysis to provide the required evidence to allow the test to progress to the next stage of evaluation. We specifically discuss inferring an influence diagram from a care pathway and conducting an elicitation exercise to allow specification of prior distributions over all model parameters. We describe the uncertainty analysis, via Monte Carlo simulation, which allowed us to demonstrate that the potential value of the test was robust to uncertainties. This paper provides a case study illustrating how a careful Bayesian analysis can be used to enhance early clinical test evaluations.
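A minimal Monte Carlo sketch of the uncertainty-propagation step is shown below, under assumed Beta priors and a toy net-benefit model; the priors, consequences and numbers are illustrative placeholders, not the elicited values or the model from the paper:

```python
# Hedged sketch: sample hypothetical elicited priors for test accuracy and
# disease prevalence, propagate them through a toy net-benefit calculation,
# and summarise how robust the conclusion is to the uncertainty.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 10_000

# Hypothetical elicited priors (Beta distributions).
sens = rng.beta(18, 2, n_sims)   # sensitivity, prior mean ~ 0.90
spec = rng.beta(16, 4, n_sims)   # specificity, prior mean ~ 0.80
prev = rng.beta(3, 7, n_sims)    # prevalence, prior mean ~ 0.30

# Hypothetical consequences: benefit of a true positive, cost of a false positive.
benefit_tp, cost_fp = 1.0, 0.25
net_benefit = prev * sens * benefit_tp - (1 - prev) * (1 - spec) * cost_fp

print("mean net benefit:", net_benefit.mean().round(3))
print("P(net benefit > 0):", (net_benefit > 0).mean().round(3))
```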
Suyu Liu, Ying Yuan 2013
Interval designs are a class of phase I trial designs for which the decision of dose assignment is determined by comparing the observed toxicity rate at the current dose with a prespecified (toxicity tolerance) interval. If the observed toxicity rate is located within the interval, we retain the current dose; if the observed toxicity rate is greater than the upper boundary of the interval, we deescalate the dose; and if the observed toxicity rate is smaller than the lower boundary of the interval, we escalate the dose. The most critical issue for the interval design is choosing an appropriate interval so that the design has good operating characteristics. By casting dose finding as a Bayesian decision-making problem, we propose new flexible methods to select the interval boundaries so as to minimize the probability of inappropriate dose assignment for patients. We show, both theoretically and numerically, that the resulting optimal interval designs not only have desirable finite- and large-sample properties, but also are particularly easy to implement in practice. Compared to existing designs, the proposed (local) optimal design has comparable average performance, but a lower risk of yielding a poorly performing clinical trial.
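The dose-assignment rule quoted above translates almost directly into code; the sketch below uses placeholder interval boundaries for a notional target toxicity rate of 0.30, rather than the optimized boundaries derived in the paper:

```python
# Minimal sketch of the interval-design decision rule described in the
# abstract. The boundaries below are placeholders, not the paper's optimal
# interval.
def interval_design_decision(n_toxicities: int, n_treated: int,
                             lower: float = 0.25, upper: float = 0.35) -> str:
    """Return 'escalate', 'stay', or 'de-escalate' for the next cohort."""
    rate = n_toxicities / n_treated
    if rate < lower:
        return "escalate"      # observed toxicity below the interval
    if rate > upper:
        return "de-escalate"   # observed toxicity above the interval
    return "stay"              # observed toxicity within the interval

# Example: 1 toxicity among 6 patients at the current dose -> escalate.
print(interval_design_decision(n_toxicities=1, n_treated=6))
```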
