Phase I dose-finding trials are increasingly challenging as the relationship between the efficacy and toxicity of new compounds (or combinations thereof) becomes more complex. Despite this, most methods commonly used in practice focus on identifying a Maximum Tolerated Dose (MTD) by learning only from toxicity events. We present a novel adaptive clinical trial methodology, called Safe Efficacy Exploration Dose Allocation (SEEDA), that aims to maximize cumulative efficacy while satisfying the toxicity safety constraint with high probability. We evaluate performance objectives that have operational meaning in practical clinical trials, including cumulative efficacy, recommendation/allocation success probabilities, toxicity violation probability, and sample efficiency. An extended SEEDA-Plateau algorithm tailored to the increase-then-plateau efficacy behavior of molecularly targeted agents (MTAs) is also presented. Through numerical experiments using both synthetic and real-world datasets, we show that SEEDA outperforms state-of-the-art clinical trial designs, finding the optimal dose with a higher success rate and fewer patients.
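The abstract describes SEEDA only at a high level, and the paper's exact algorithm is not reproduced here. As a minimal sketch of the underlying safe-exploration idea, the toy bandit below restricts optimistic, efficacy-driven allocation to doses whose toxicity has not yet been shown, with confidence, to exceed a safety threshold. The dose-level probabilities, safety threshold, and Hoeffding-style confidence bonus are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dose-level probabilities (unknown to the algorithm).
true_tox = np.array([0.05, 0.10, 0.20, 0.35, 0.55])
true_eff = np.array([0.10, 0.25, 0.40, 0.50, 0.55])
TOX_LIMIT = 0.30          # maximum acceptable toxicity probability
N_PATIENTS = 200

n = np.zeros(5)           # patients assigned per dose
tox = np.zeros(5)         # observed toxicity events per dose
eff = np.zeros(5)         # observed efficacy responses per dose

for t in range(1, N_PATIENTS + 1):
    radius = np.sqrt(np.log(2 * t) / np.maximum(n, 1))   # confidence bonus
    # A dose is "safe" until its toxicity lower confidence bound exceeds the limit.
    tox_lcb = np.where(n > 0, tox / np.maximum(n, 1) - radius, 0.0)
    eff_ucb = np.where(n > 0, eff / np.maximum(n, 1) + radius, 1.0)
    safe = tox_lcb <= TOX_LIMIT
    if not safe.any():
        break
    # Optimism on efficacy, restricted to the safe set.
    d = np.flatnonzero(safe)[np.argmax(eff_ucb[safe])]
    n[d] += 1
    tox[d] += rng.random() < true_tox[d]
    eff[d] += rng.random() < true_eff[d]

# Recommend the empirically most efficacious dose among empirically safe ones.
emp_eff = np.where(n > 0, eff / np.maximum(n, 1), -1.0)
emp_tox = np.where(n > 0, tox / np.maximum(n, 1), 1.0)
print("allocations:", n.astype(int))
print("recommended dose:", np.argmax(np.where(emp_tox <= TOX_LIMIT, emp_eff, -1.0)))
```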
In learning-phase clinical trials in drug development, adaptive designs can be efficient and highly informative when used appropriately. In this article, we extend the multiple comparison procedures with modeling techniques (MCP-Mod) approach with generalized multiple contrast tests (GMCTs) to two-stage adaptive designs for establishing proof-of-concept. The results of an interim analysis of first-stage data are used to adapt the candidate dose-response models and the dosages studied in the second stage. GMCTs are used in both stages to obtain stage-wise p-values, which are then combined to determine an overall p-value. An alternative approach is also considered that combines the t-statistics across stages, employing the conditional rejection probability (CRP) principle to preserve the Type I error probability. Simulation studies demonstrate that the adaptive designs are advantageous compared to the corresponding tests in a non-adaptive design if the selection of the candidate set of dose-response models is not well informed by evidence from preclinical and early-phase studies.
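A standard way to combine stage-wise p-values in two-stage adaptive designs is the inverse normal combination function with pre-specified weights. The sketch below assumes weights proportional to the square roots of the stage sample sizes, a common but not the only choice, and not necessarily the combination rule used in the article.

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, n1, n2):
    """Combine stage-wise one-sided p-values using pre-specified weights
    proportional to the square roots of the stage sample sizes."""
    w1 = np.sqrt(n1 / (n1 + n2))
    w2 = np.sqrt(n2 / (n1 + n2))
    z = w1 * norm.isf(p1) + w2 * norm.isf(p2)
    return norm.sf(z)   # overall one-sided p-value

# e.g. p1 from the first-stage GMCT, p2 from the adapted second stage
print(inverse_normal_combination(p1=0.08, p2=0.03, n1=60, n2=60))
```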
Response-adaptive randomization (RAR) is part of a wider class of data-dependent sampling algorithms, for which clinical trials are used as a motivating application. In that context, patient allocation to treatments is determined by randomization probabilities that are altered based on the accrued response data in order to achieve experimental goals. RAR has received abundant theoretical attention from the biostatistical literature since the 1930s and has been the subject of numerous debates. In the last decade, it has received renewed consideration from the applied and methodological communities, driven by successful practical examples and its widespread use in machine learning. Papers on the subject can give differing views on its usefulness, and reconciling these can be difficult. This work aims to bridge those differences by providing a unified, broad and up-to-date review of methodological and practical issues to consider when debating the use of RAR in clinical trials.
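One concrete example of an RAR rule (among the many this literature discusses, not a method proposed by the review itself) is Thompson-sampling-style allocation for binary outcomes: each arm's randomization probability is its posterior probability of having the highest response rate under independent Beta-Bernoulli models.

```python
import numpy as np

rng = np.random.default_rng(1)

def rar_probability(successes, failures, draws=10_000):
    """Randomization probability for each arm: the Monte Carlo estimate of
    the posterior probability that the arm has the highest response rate,
    under independent Beta(1 + s, 1 + f) posteriors."""
    samples = rng.beta(1 + successes[:, None], 1 + failures[:, None],
                       size=(len(successes), draws))
    best = np.argmax(samples, axis=0)
    return np.bincount(best, minlength=len(successes)) / draws

# Accrued data: 12/30 responses on control, 18/30 on treatment.
print(rar_probability(np.array([12, 18]), np.array([18, 12])))
```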
We propose BaySize, a sample size calculator for phase I clinical trials using Bayesian models. BaySize applies the concept of effect size in dose finding, assuming the MTD is defined based on an equivalence interval. Leveraging a decision framework that involves composite hypotheses, BaySize utilizes two prior distributions, the fitting prior (for model fitting) and the sampling prior (for data generation), to calculate the sample size needed to achieve a desired statistical power. Look-up tables are generated to facilitate practical applications. To our knowledge, BaySize is the first sample size tool that can be applied to a broad range of phase I trial designs.
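The two-prior mechanic can be illustrated with a deliberately simplified single-dose example: trials are simulated under a sampling prior concentrated near the target toxicity rate, each simulated dataset is analysed under a vague fitting prior, and "power" is the fraction of trials whose posterior concentrates inside the equivalence interval. The priors, interval, and cutoff below are illustrative assumptions, not BaySize's actual defaults.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(2)

def estimated_power(n, target=0.30, half_width=0.05, cutoff=0.5, n_sims=2000):
    """Crude two-prior power estimate for a single dose: generate data under
    a sampling prior near the target rate, analyse with a vague fitting prior,
    and count trials whose posterior mass in the equivalence interval is high."""
    # Sampling prior (assumed): true toxicity rate ~ Beta(30*t, 30*(1 - t)).
    true_rates = rng.beta(30 * target, 30 * (1 - target), size=n_sims)
    tox = rng.binomial(n, true_rates)
    # Fitting prior: Beta(1, 1); posterior mass inside [target +/- half_width].
    lo, hi = target - half_width, target + half_width
    post_mass = (beta.cdf(hi, 1 + tox, 1 + n - tox)
                 - beta.cdf(lo, 1 + tox, 1 + n - tox))
    return np.mean(post_mass > cutoff)

for n in (15, 30, 60, 120):
    print(n, estimated_power(n))
```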
Detection of interactions between treatment effects and patient descriptors in clinical trials is critical for optimizing the drug development process. The increasing volume of data accumulated in clinical trials provides a unique opportunity to discover new biomarkers and further the goal of personalized medicine, but it also requires innovative, robust biomarker detection methods capable of detecting non-linear, and sometimes weak, signals. We propose a set of novel univariate statistical tests, based on the theory of random walks, which are able to capture non-linear and non-monotonic covariate-treatment interactions. We also propose a novel combined test, which leverages the power of all of our proposed univariate tests into a single general-case tool. We present results on both synthetic and real-world clinical trials, where we compare our method with state-of-the-art techniques and demonstrate the utility and robustness of our approach.
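To make the random-walk idea concrete, the sketch below implements one simplified variant (not necessarily any of the paper's proposed tests): patients are ordered by the covariate, within-arm-centered outcomes form a cumulative walk whose excursions reflect a treatment effect that varies with the covariate, and the maximum absolute excursion is calibrated by permuting the covariate.

```python
import numpy as np

rng = np.random.default_rng(3)

def walk_statistic(x, y, z):
    """Maximum absolute excursion of the cumulative treatment-contrast walk
    taken in covariate order; within-arm centering removes main effects."""
    contrast = np.where(z == 1, y - y[z == 1].mean(), -(y - y[z == 0].mean()))
    walk = np.cumsum(contrast[np.argsort(x)])
    return np.max(np.abs(walk)) / np.sqrt(len(x))

def permutation_pvalue(x, y, z, n_perm=2000):
    """Null distribution obtained by shuffling the covariate against the
    (outcome, arm) pairs, which breaks any covariate-treatment interaction."""
    observed = walk_statistic(x, y, z)
    null = np.array([walk_statistic(rng.permutation(x), y, z)
                     for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

# Toy trial with a non-monotonic (U-shaped) covariate-treatment interaction.
n = 200
x = rng.uniform(-1, 1, n)
z = rng.integers(0, 2, n)
y = 0.8 * z * x**2 + rng.normal(size=n)
print(permutation_pvalue(x, y, z))
```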
Selective recruitment designs preferentially recruit individuals who are estimated to be statistically informative onto a clinical trial. Individuals who are expected to contribute less information have a lower probability of recruitment. Furthermore, in an information-adaptive design, recruits are allocated to treatment arms in a manner that maximises information gain. The informativeness of an individual depends on their covariate (or biomarker) values, and how information is defined is a critical element of information-adaptive designs. In this paper we define and evaluate four different methods for quantifying statistical information. Using both experimental data and numerical simulations, we show that selective recruitment designs can offer a substantial increase in statistical power compared to randomised designs. In trials without selective recruitment, we find that allocating individuals to treatment arms according to information-adaptive protocols also leads to an increase in statistical power. Consequently, selective recruitment designs can potentially achieve successful trials using fewer recruits, thereby offering economic and ethical advantages.
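As one hypothetical instance of an information measure (the paper defines and compares four; this is not claimed to be among them), a candidate's informativeness can be scored by the entropy of their predicted outcome under a model fitted to the recruits accrued so far: candidates whose outcome is least predictable from their biomarkers carry the most information.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def informativeness(model, x):
    """Entropy (in nats) of the predicted binary outcome for a candidate
    with covariate vector x; maximal when the outcome is least predictable."""
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

# Model trained on the recruits so far (simulated covariates X, outcomes y).
X = rng.normal(size=(80, 3))
y = (X[:, 0] + rng.normal(size=80) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Score a new candidate and turn the score into a recruitment probability
# by normalizing against the maximum possible entropy, log(2).
candidate = rng.normal(size=3)
info = informativeness(model, candidate)
print(f"info={info:.3f}, recruitment probability={info / np.log(2):.2f}")
```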