Adaptive designs for clinical trials permit alterations to a study in response to accumulating data, making trials more flexible, ethical, and efficient. These benefits are achieved while preserving the integrity and validity of the trial, through pre-specification of the possible alterations and proper statistical adjustment for them during the course of the trial. Despite much research in the statistical literature highlighting the potential advantages of adaptive designs over traditional fixed designs, the uptake of such methods in clinical research has been slow. One major reason is that the different adaptations to trial designs, as well as their advantages and limitations, remain unfamiliar to large parts of the clinical community. The aim of this paper is to clarify where adaptive designs can be used to address specific questions of scientific interest. We introduce the main features of adaptive designs and commonly used terminology, highlight their utility and pitfalls, and illustrate their use through case studies of adaptive trials ranging from early-phase dose escalation to confirmatory Phase III studies.
Nowadays, more and more clinical trials use combination agents as the intervention to achieve better therapeutic responses. However, dose-finding for combination agents is much more complicated than for a single agent because the full ordering of toxicity across dose combinations is unknown. Standard phase I designs are therefore unable to identify the maximum tolerated dose (MTD) of a combination. Motivated by this need, many novel phase I clinical trial designs for combination agents have been proposed. With so many designs available, however, research that compares their performance, explores the impact of design parameters, and provides recommendations is very limited. We therefore conducted a simulation study to evaluate multiple phase I designs proposed to identify a single MTD for combination agents under various scenarios. We also explored the influence of different design parameters. Finally, we summarize the pros and cons of each design and provide general guidance on design selection; a sketch of the kind of simulation set-up involved follows below.
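To make the simulation framing concrete, here is a minimal sketch, not any of the published designs evaluated in the study: the scenario is a hypothetical true toxicity matrix over a dose-combination grid with a target toxicity rate, and the comparison metric is the percentage of simulated trials that select a true MTD. The placeholder selector inside `run_design` is an assumption for illustration only.

```python
# A minimal sketch of a combination dose-finding simulation scenario
# (hypothetical toxicity matrix and a stand-in selection rule, not any
# specific published design).
import numpy as np

true_tox = np.array([[0.05, 0.10, 0.20],   # rows: dose levels of agent A
                     [0.10, 0.20, 0.30],   # cols: dose levels of agent B
                     [0.20, 0.30, 0.45]])
target = 0.30
true_mtds = set(zip(*np.where(np.abs(true_tox - target) < 1e-9)))

def run_design(rng):
    # Placeholder for one simulated trial of a combination design: observe
    # binomial toxicity counts at every combo (12 patients each) and pick
    # the combo whose observed rate is closest to the target.
    obs = rng.binomial(12, true_tox) / 12
    return np.unravel_index(np.argmin(np.abs(obs - target)), obs.shape)

rng = np.random.default_rng(0)
hits = sum(run_design(rng) in true_mtds for _ in range(2000))
print(f"correct MTD selection: {100 * hits / 2000:.1f}%")
```

Real designs differ in how they assign patients and model the partial toxicity ordering, but they are compared on exactly this kind of correct-selection metric across scenarios.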
Algorithms that compute locally optimal continuous designs often rely on a finite design space or on repeatedly solving a complex non-linear program. Both methods require extensive evaluations of the Jacobian Df of the underlying model, which presents a heavy computational burden. Based on the Kiefer-Wolfowitz Equivalence Theorem, we present a novel design-of-experiments algorithm that computes optimal designs in a continuous design space. For this iterative algorithm we combine an adaptive Bayes-like sampling scheme with Gaussian process regression to approximate the directional derivative of the design criterion. The approximation allows us to adaptively select new design points at which to evaluate the model. This adaptive selection requires significantly fewer evaluations of Df and reduces the runtime of the computations. We show the viability of the new algorithm on two examples from chemical engineering.
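The following is a minimal sketch of the underlying idea, not the authors' implementation: for D-optimality, the Kiefer-Wolfowitz directional derivative is the sensitivity function f(x)ᵀM(ξ)⁻¹f(x) − p, and a design is optimal when its supremum is at most zero. The sketch fits a Gaussian process surrogate to a few evaluated sensitivity values and uses an upper-confidence-bound rule to decide where to evaluate the expensive Jacobian next. The two-parameter model, design interval, and UCB rule are all assumptions made for illustration.

```python
# Sketch: GP surrogate for the D-optimality sensitivity function, with
# adaptive (Bayes-like) selection of new evaluation points.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def jacobian(x):
    # Hypothetical model Jacobian Df at design point x (assumed
    # two-parameter exponential model); the expensive call in practice.
    return np.array([np.exp(-x), -x * np.exp(-x)])

def sensitivity(x, M_inv, p=2):
    # Kiefer-Wolfowitz directional derivative for D-optimality.
    f = jacobian(x)
    return f @ M_inv @ f - p

# Assumed starting design: support points with equal weights.
support, weights = np.array([0.5, 2.0]), np.array([0.5, 0.5])
M = sum(w * np.outer(jacobian(x), jacobian(x)) for x, w in zip(support, weights))
M_inv = np.linalg.inv(M)

# Evaluate the sensitivity at a few seed points, then let the GP propose
# where to evaluate next instead of sweeping a fine grid of Df calls.
X_eval = np.linspace(0.1, 5.0, 5)
y_eval = np.array([sensitivity(x, M_inv) for x in X_eval])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)

for _ in range(10):
    gp.fit(X_eval.reshape(-1, 1), y_eval)
    grid = np.linspace(0.1, 5.0, 400)
    mean, std = gp.predict(grid.reshape(-1, 1), return_std=True)
    x_new = grid[np.argmax(mean + std)]      # upper-confidence-bound pick
    X_eval = np.append(X_eval, x_new)
    y_eval = np.append(y_eval, sensitivity(x_new, M_inv))

# By the equivalence theorem, max sensitivity <= 0 certifies D-optimality.
print("estimated max sensitivity:", y_eval.max())
```

The point of the surrogate is that each loop iteration costs one Jacobian evaluation rather than one per grid point; a full algorithm would also update the design weights between iterations.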
Observational studies are valuable for estimating the effects of various medical interventions, but are notoriously difficult to evaluate because the methods used in observational studies require many untestable assumptions. This lack of verifiability makes it difficult both to compare different observational study methods and to trust the results of any particular observational study. In this work, we propose TrialVerify, a new approach for evaluating observational study methods based on ground truth sourced from clinical trial reports. We process trial reports into a denoised collection of known causal relationships that can then be used to estimate the precision and recall of various observational study methods. We then use TrialVerify to evaluate multiple observational study methods in terms of their ability to identify the known causal relationships from a large national insurance claims dataset. We found that inverse propensity score weighting is an effective approach for accurately reproducing known causal relationships and outperforms other observational study methods. TrialVerify is made freely available for others to evaluate observational study methods.
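A minimal sketch of the evaluation logic may help; the drug-outcome names, data-generating process, and flagging threshold below are all hypothetical, and this is not the TrialVerify code. It shows the two pieces the abstract describes: an inverse propensity score weighted effect estimate for a treatment-outcome pair, and precision/recall of the flagged pairs against trial-derived known causal relationships.

```python
# Sketch: IPW effect estimate for one drug-outcome pair, then precision/
# recall of flagged pairs against a trial-derived ground-truth set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))                        # baseline covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # confounded treatment
outcome = rng.binomial(1, 0.05 + 0.10 * treated + 0.05 * (X[:, 0] > 0))

# Inverse propensity score weighting: fit a propensity model, then weight
# each patient by the inverse probability of the treatment they received.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
ate = (np.average(outcome[treated == 1], weights=1 / ps[treated == 1])
       - np.average(outcome[treated == 0], weights=1 / (1 - ps[treated == 0])))

# Flag the pair if the estimated effect clears a (hypothetical) threshold,
# then score flags against known causal relationships from trial reports.
flagged = {("drugA", "nausea")} if ate > 0.05 else set()
known_causal = {("drugA", "nausea"), ("drugB", "rash")}
tp = len(flagged & known_causal)
precision = tp / max(len(flagged), 1)
recall = tp / len(known_causal)
print(f"ATE={ate:.3f} precision={precision:.2f} recall={recall:.2f}")
```

In the actual evaluation this loop would run over many drug-outcome pairs from the claims data, with the known-causal set built from the denoised trial reports.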
In this guide, we show how to perform constraint-based causal discovery using three popular software packages: pcalg (with the add-ons tpc and micd), bnlearn, and TETRAD. We focus on how these packages can be used with observational data, including in the presence of mixed data (i.e., data where some variables are continuous while others are categorical), a known time ordering between variables, and missing data. Throughout, we point out the relative strengths and limitations of each package and give practical recommendations. We hope this guide helps anyone who is interested in performing constraint-based causal discovery on their data.
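For readers unfamiliar with what a constraint-based call looks like, here is a minimal sketch. It uses the Python causal-learn package, which is an assumption on my part and is not one of the three packages covered in the guide (pcalg and bnlearn are R packages, TETRAD is Java); the shape of the workflow is the same: data in, conditional independence test chosen, equivalence class (CPDAG) out.

```python
# Sketch: PC algorithm on continuous observational data via the Python
# causal-learn package (a stand-in, NOT one of the guide's packages).
import numpy as np
from causallearn.search.ConstraintBased.PC import pc
from causallearn.utils.cit import fisherz

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)   # x -> y
z = 0.5 * y + rng.normal(size=n)   # y -> z
data = np.column_stack([x, y, z])

# Fisher-z suits continuous Gaussian data; mixed or discrete data would
# require a different conditional independence test.
cg = pc(data, alpha=0.05, indep_test=fisherz)
print(cg.G)   # estimated CPDAG over the three variables
```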
A utility-based Bayesian population finding (BaPoFi) method was proposed by Morita and Muller (2017, Biometrics, 1355-1365) to analyze data from a randomized clinical trial with the aim of identifying good predictive baseline covariates for optimizing the target population of a future study. The approach casts population finding as a formal decision problem, combined with a flexible probability model that uses a random forest to define the regression mean function. BaPoFi is constructed to handle a single continuous or binary outcome variable. In this paper, we develop BaPoFi-TTE as an extension of the earlier approach to the clinically important case of time-to-event (TTE) data with censoring, while also accounting for a toxicity outcome. We model the association of TTE data with baseline covariates using a semi-parametric failure time model with a Polya tree prior for the unknown error distribution and a random forest for a flexible regression mean function. We define a utility function that addresses the trade-off between efficacy and toxicity, one of the important clinical considerations in population finding. We examine the operating characteristics of the proposed method in extensive simulation studies. For illustration, we apply the proposed method to data from a randomized oncology clinical trial. Concerns arising from a preliminary analysis of the same data under a parametric model motivated the more general approach proposed here.
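To illustrate the core trade-off, here is a minimal sketch, assuming a simple linear utility and synthetic per-patient summaries; it is not the BaPoFi-TTE model. In the actual method the survival gains and toxicity probabilities would be posterior summaries from the semi-parametric failure time model and the toxicity model, and the subpopulations would be selected by the decision procedure rather than enumerated by hand.

```python
# Sketch: score candidate covariate-defined subpopulations by expected
# efficacy (mean survival gain) penalized by toxicity probability.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
age = rng.uniform(40, 80, n)
biomarker = rng.normal(0, 1, n)
# Hypothetical per-patient posterior summaries (assumed, for illustration).
survival_gain = 2.0 + 1.5 * biomarker - 0.02 * (age - 60)  # months
tox_prob = 0.15 + 0.05 * (age > 70)

def utility(mask, lam=10.0):
    # Efficacy-toxicity trade-off over the candidate subpopulation:
    # mean survival gain minus lam times mean toxicity probability.
    if mask.sum() == 0:
        return -np.inf
    return survival_gain[mask].mean() - lam * tox_prob[mask].mean()

candidates = {
    "all patients": np.ones(n, bool),
    "biomarker > 0": biomarker > 0,
    "biomarker > 0 and age <= 70": (biomarker > 0) & (age <= 70),
}
best = max(candidates, key=lambda k: utility(candidates[k]))
print({k: round(utility(m), 2) for k, m in candidates.items()}, "->", best)
```

The penalty weight lam encodes how many months of survival gain the decision maker would trade for a unit reduction in toxicity risk; choosing it is a clinical, not statistical, judgment.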