External pilot trials of complex interventions are used to help determine if and how a confirmatory trial should be undertaken, providing estimates of parameters such as recruitment, retention and adherence rates. The decision to progress to the confirmatory trial is typically made by comparing these estimates to pre-specified thresholds known as progression criteria, although the statistical properties of such decision rules are rarely assessed. Such assessment is complicated by several methodological challenges, including the simultaneous evaluation of multiple endpoints, complex multi-level models, small sample sizes, and uncertainty in nuisance parameters. In response to these challenges, we describe a Bayesian approach to the design and analysis of external pilot trials. We show how progression decisions can be made by minimising the expected value of a loss function, defined over the whole parameter space so that preferences and trade-offs between multiple parameters can be articulated and used in the decision-making process. The assessment of preferences is kept feasible by using a piecewise constant parameterisation of the loss function, the parameters of which are chosen at the design stage to lead to desirable operating characteristics. We describe a flexible, yet computationally intensive, nested Monte Carlo algorithm for estimating operating characteristics. The method is used to revisit the design of an external pilot trial of a complex intervention designed to increase the physical activity of care home residents.
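The nested Monte Carlo idea above can be sketched in miniature. The following is an illustrative toy, not the authors' implementation: the problem is reduced to a single adherence-rate parameter with a conjugate Beta prior, and the 0-1 loss, the adequacy threshold of 0.5, and the pilot sample size are all assumed values chosen only to make the two-level simulation concrete.

```python
# Nested Monte Carlo assessment of a Bayesian progression rule (toy sketch).
import numpy as np

rng = np.random.default_rng(1)
n_pilot = 30          # pilot trial sample size (assumed for illustration)
a0, b0 = 1.0, 1.0     # Beta(1, 1) analysis prior on the adherence rate theta

def decide(x: int, n: int, n_inner: int = 2000) -> str:
    """Inner Monte Carlo: choose the decision minimising posterior expected loss.

    The loss is piecewise constant: 'go' costs 1 when theta <= 0.5
    (progressing with inadequate adherence), 'stop' costs 1 when theta > 0.5
    (abandoning an adequate intervention).
    """
    theta = rng.beta(a0 + x, b0 + n - x, size=n_inner)  # posterior draws
    risk_go = np.mean(theta <= 0.5)
    risk_stop = np.mean(theta > 0.5)
    return "go" if risk_go < risk_stop else "stop"

def prob_go(theta_true: float, n_outer: int = 500) -> float:
    """Outer Monte Carlo: the operating characteristic P(go) at a fixed truth."""
    gos = sum(decide(rng.binomial(n_pilot, theta_true), n_pilot) == "go"
              for _ in range(n_outer))
    return gos / n_outer
```

At the design stage the loss parameters would be adjusted until operating characteristics such as `prob_go` are high when adherence is adequate and low otherwise.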
Most clinical trials involve the comparison of a new treatment to a control arm (e.g., the standard of care) and the estimation of a treatment effect. External data, including historical clinical trial data and real-world observational data, are commonly available for the control arm. Borrowing information from external data holds the promise of improving the estimation of relevant parameters and increasing the power to detect a treatment effect if one exists. In this paper, we propose to use Bayesian additive regression trees (BART) for incorporating external data into the analysis of clinical trials, with a specific goal of estimating the conditional or population average treatment effect. BART naturally adjusts for patient-level covariates and captures potentially heterogeneous treatment effects across different data sources, achieving flexible borrowing. Simulation studies demonstrate that BART compares favorably to a hierarchical linear model and a normal-normal hierarchical model. We illustrate the proposed method with an acupuncture trial.
We propose an information borrowing strategy for the design and monitoring of phase II basket trials based on the local multisource exchangeability assumption between baskets (disease types). We construct a flexible statistical design using the proposed strategy. Our approach partitions potentially heterogeneous baskets into non-exchangeable blocks. Information borrowing is only allowed to occur locally, i.e., among similar baskets within the same block. The amount of borrowing is determined by between-basket similarities. The number of blocks and block memberships are inferred from data based on the posterior probability of each partition. The proposed method is compared to the multisource exchangeability model and to Simon's two-stage design. In a variety of simulation scenarios, we demonstrate that the proposed method maintains the type I error rate and achieves desirable basket-wise power. In addition, our method is computationally efficient compared to existing Bayesian methods in that the posterior profiles of interest can be derived explicitly without the need for sampling algorithms.
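The local borrowing idea can be illustrated with a toy closed-form calculation (hypothetical priors and data, not the authors' exact model): enumerate all partitions of K = 3 baskets, score each partition by the product of Beta-Binomial marginal likelihoods of its pooled blocks, and normalise to obtain posterior partition probabilities without any sampling algorithm.

```python
# Posterior over basket partitions via Beta-Binomial marginal likelihoods (toy).
from math import lgamma, exp

def lbeta(p: float, q: float) -> float:
    """Log of the Beta function."""
    return lgamma(p) + lgamma(q) - lgamma(p + q)

def log_marglik(x: int, n: int, a: float = 0.5, b: float = 0.5) -> float:
    """Log Beta-Binomial marginal likelihood of x responders in n patients,
    under a Beta(a, b) prior on the block's common response rate."""
    return lbeta(a + x, b + n - x) - lbeta(a, b)

def partitions(items):
    """Enumerate all set partitions of a list."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [head]] + part[i + 1:]
        yield [[head]] + part

x, n = [1, 2, 7], [10, 10, 10]   # toy responders / patients per basket
weights = {}
for part in partitions([0, 1, 2]):
    key = tuple(sorted(tuple(sorted(blk)) for blk in part))
    ll = sum(log_marglik(sum(x[i] for i in blk), sum(n[i] for i in blk))
             for blk in part)
    weights[key] = exp(ll)       # uniform prior over partitions
total = sum(weights.values())
post = {p: w / total for p, w in weights.items()}
```

In this toy example, baskets 0 and 1, with similar response rates, are favoured to share a block, while basket 2 is kept apart, so any borrowing occurs only locally.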
A central goal in designing clinical trials is to find the test that maximizes power (or equivalently minimizes required sample size) for finding a true research hypothesis subject to the constraint of type I error. When there is more than one test, such as in clinical trials with multiple endpoints, the issues of optimal design and optimal policies become more complex. In this paper we address the question of how such optimal tests should be defined and how they can be found. We review different notions of power and how they relate to study goals, and also consider the requirements of type I error control and the nature of the policies. This leads us to formulate the optimal policy problem as an explicit optimization problem with an objective and constraints that describe its specific desiderata. We describe a complete solution for deriving optimal policies for two hypotheses; these policies have the desired monotonicity properties and are computationally simple. For some of the optimization formulations this yields optimal policies that are identical to existing policies, such as Hommel's procedure or the procedure of Bittman et al. (2009), while for others it yields completely novel and more powerful policies than existing ones. We demonstrate the nature of our novel policies and their improved power extensively in simulation and on the APEX study (Cohen et al., 2016).
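As a point of reference for the two-hypothesis setting, Hommel's procedure coincides with Hochberg's step-up rule when there are only two p-values; a minimal sketch (the alpha level is illustrative):

```python
def hommel_two(p1: float, p2: float, alpha: float = 0.05) -> set:
    """Hommel's procedure for two hypotheses (equivalent to Hochberg's
    step-up rule in this case): reject both if the larger p-value is at
    most alpha; otherwise reject the smaller if it is at most alpha / 2.
    Returns the set of rejected hypothesis indices (1 and/or 2)."""
    if max(p1, p2) <= alpha:
        return {1, 2}
    if min(p1, p2) <= alpha / 2:
        return {1} if p1 <= p2 else {2}
    return set()
```

A policy like this is one fixed point of comparison; the optimization framework described in the abstract searches over all policies satisfying the type I error constraint.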
Integrated phase I-II clinical trial designs are efficient approaches to accelerate drug development. In cases where efficacy cannot be ascertained in a short period of time, two-stage approaches are usually employed. When different patient populations are involved across stages, whether and how to use efficacy data collected from both stages merits careful consideration. In this paper, we focus on a two-stage design that aims to estimate safe dose combinations with a certain level of efficacy. In stage I, conditional escalation with overdose control (EWOC) is used to allocate successive cohorts of patients. The maximum tolerated dose (MTD) curve is estimated based on a Bayesian dose-toxicity model. In stage II, we consider an adaptive allocation of patients to drug combinations that have a high probability of being efficacious along the obtained MTD curve. A robust Bayesian hierarchical model is proposed to allow sharing of information on the efficacy parameters across stages, assuming the related parameters are either exchangeable or nonexchangeable. Under the assumption of exchangeability, a random-effects distribution is specified for the main effects parameters to capture uncertainty about the between-stage differences. The proposed methodology is assessed with extensive simulations motivated by a real phase I-II drug combination trial using continuous doses.
Interval designs are a class of phase I trial designs for which the decision of dose assignment is determined by comparing the observed toxicity rate at the current dose with a prespecified (toxicity tolerance) interval. If the observed toxicity rate is located within the interval, we retain the current dose; if the observed toxicity rate is greater than the upper boundary of the interval, we de-escalate the dose; and if the observed toxicity rate is smaller than the lower boundary of the interval, we escalate the dose. The most critical issue for the interval design is choosing an appropriate interval so that the design has good operating characteristics. By casting dose finding as a Bayesian decision-making problem, we propose new flexible methods to select the interval boundaries so as to minimize the probability of inappropriate dose assignment for patients. We show, both theoretically and numerically, that the resulting optimal interval designs not only have desirable finite- and large-sample properties, but also are particularly easy to implement in practice. Compared to existing designs, the proposed (local) optimal design has comparable average performance, but a lower risk of yielding a poorly performing clinical trial.
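The dose-assignment rule described above reduces to a single comparison per cohort. A sketch follows, in which the boundaries 0.236 and 0.358 are illustrative values commonly associated with a target toxicity rate of 0.3, not the optimised boundaries derived in the paper:

```python
def next_dose(current: int, n_tox: int, n_treated: int, n_doses: int,
              lower: float = 0.236, upper: float = 0.358) -> int:
    """Assign the next cohort's dose (0-indexed level) by comparing the
    observed toxicity rate at the current dose with the tolerance
    interval [lower, upper]."""
    rate = n_tox / n_treated
    if rate > upper:                          # too toxic: de-escalate
        return max(current - 1, 0)
    if rate < lower:                          # well tolerated: escalate
        return min(current + 1, n_doses - 1)
    return current                            # within the interval: retain
```

The design question the abstract addresses is precisely how to choose `lower` and `upper` so that decisions made by this simple rule minimise the probability of inappropriate dose assignment.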