In many health domains, such as substance use, outcomes are often counts with an excessive number of zeros (EZ): count data in which zeros occur at a rate significantly higher than expected under a standard count distribution (e.g., Poisson). However, an important gap exists in sample size estimation methodology for planning sequential multiple assignment randomized trials (SMARTs) for comparing dynamic treatment regimens (DTRs) using longitudinal count data. DTRs, also known as treatment algorithms or adaptive interventions, mimic the individualized and evolving nature of patient care through the specification of decision rules guiding the type, timing, modality of delivery, and dosage of treatments to address the unique and changing needs of individuals. To close this gap, we develop a Monte Carlo-based approach to sample size estimation. A SMART for engaging alcohol- and cocaine-dependent patients in treatment is used as motivation.
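The Monte Carlo idea behind this kind of sample size estimation can be illustrated with a minimal sketch (not the authors' actual method): repeatedly simulate two-arm data from a zero-inflated Poisson model, apply a test, and take the empirical rejection rate as power. All parameter values and the choice of a simple Welch t-test are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def zip_sample(n, pi_zero, lam):
    """Draw n zero-inflated Poisson counts: structural zero w.p. pi_zero,
    otherwise Poisson(lam)."""
    counts = rng.poisson(lam, size=n)
    counts[rng.random(n) < pi_zero] = 0
    return counts

def mc_power(n_per_arm, pi0, lam0, pi1, lam1, n_sims=1000, alpha=0.05):
    """Monte Carlo estimate of power for a two-sample comparison of ZIP outcomes.
    A Welch t-test on the raw counts stands in for the analysis model."""
    rejections = 0
    for _ in range(n_sims):
        y0 = zip_sample(n_per_arm, pi0, lam0)
        y1 = zip_sample(n_per_arm, pi1, lam1)
        _, p = stats.ttest_ind(y0, y1, equal_var=False)
        rejections += p < alpha
    return rejections / n_sims

# Scan candidate sample sizes for the smallest achieving the target power.
for n in (50, 100, 150, 200):
    print(n, round(mc_power(n, pi0=0.4, lam0=2.0, pi1=0.4, lam1=2.8), 3))
```

In practice the data-generating model and test would reflect the SMART design and the longitudinal count analysis, but the grid-search-over-n structure stays the same.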
One of the main goals of a sequential multiple assignment randomized trial (SMART) is to identify the most efficacious dynamic treatment regimes embedded in the design. The analysis method known as multiple comparisons with the best (MCB) allows comparison among dynamic treatment regimes and identification of a set of optimal regimes in the frequentist setting for continuous outcomes, thereby directly addressing the main goal of a SMART. In this paper, we develop a Bayesian generalization of MCB for SMARTs with binary outcomes. Furthermore, we show how to choose the sample size so that inferior embedded DTRs are screened out with a specified power. We compare log-odds between different DTRs using their exact distribution, without relying on asymptotic normality in either the analysis or the power calculation. We conduct extensive simulation studies under two SMART designs and illustrate our method's application to the Adaptive Treatment for Alcohol and Cocaine Dependence (ENGAGE) trial.
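A stripped-down sketch of the Bayesian MCB idea for binary outcomes, under assumptions not taken from the paper (independent Beta posteriors per DTR, hypothetical success counts, an arbitrary practical-equivalence margin): draw from each DTR's exact posterior, transform to log-odds, and compute the posterior probability that each DTR is within the margin of the best.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binary-outcome summaries for three embedded DTRs: (successes, n).
data = {"DTR-A": (45, 100), "DTR-B": (52, 100), "DTR-C": (30, 100)}

# Beta(1,1) priors yield Beta posteriors; sampling them uses the exact
# posterior of the log-odds rather than a normal approximation.
draws = {k: rng.beta(1 + s, 1 + n - s, size=20_000) for k, (s, n) in data.items()}
log_odds = {k: np.log(p / (1 - p)) for k, p in draws.items()}

# Posterior probability that each DTR's log-odds is within `margin` of the
# best DTR's log-odds; DTRs with high probability form the MCB-style set.
best = np.maximum.reduce(list(log_odds.values()))
margin = 0.2
for k, lo in log_odds.items():
    print(k, round(float(np.mean(lo >= best - margin)), 3))
```

Screening out inferior DTRs then amounts to dropping regimes whose probability falls below a chosen threshold; the paper's power calculation sizes the trial so this screening succeeds with specified probability.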
Clinicians and researchers alike are increasingly interested in how best to personalize interventions. A dynamic treatment regimen (DTR) is a sequence of pre-specified decision rules that can be used to guide the delivery of a sequence of treatments or interventions tailored to the changing needs of the individual. The sequential multiple-assignment randomized trial (SMART) is a research tool that allows for the construction of effective DTRs. We derive easy-to-use formulae for computing the total sample size for three common two-stage SMART designs in which the primary aim is to compare mean end-of-study outcomes for two embedded DTRs that recommend different first-stage treatments. The formulae are derived in the context of a regression model which leverages information from a longitudinal outcome collected over the entire study. We show that the sample size formula for a SMART can be written as the product of the sample size formula for a standard two-arm randomized trial, a deflation factor that accounts for the increased statistical efficiency resulting from a longitudinal analysis, and an inflation factor that accounts for the design of a SMART. The SMART design inflation factor is typically a function of the anticipated probability of response to first-stage treatment. We review modeling and estimation for DTR effect analyses using a longitudinal outcome from a SMART, as well as the estimation of standard errors. We also present estimators for the covariance matrix for a variety of common working correlation structures. Methods are motivated using the ENGAGE study, a SMART aimed at developing a DTR for increasing motivation to attend treatments among alcohol- and cocaine-dependent patients.
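The product structure described above can be sketched numerically. This is an illustration of the factorization only: the two-arm formula is the standard normal-approximation one, while the deflation and inflation values are hypothetical inputs, not the factors derived in the paper.

```python
from scipy.stats import norm

def rct_two_arm_n(delta, sigma, alpha=0.05, power=0.80):
    """Total N for a standard two-arm trial comparing means
    (normal approximation, equal allocation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 4 * (z * sigma / delta) ** 2

def smart_total_n(delta, sigma, deflation, inflation, alpha=0.05, power=0.80):
    """Illustrative product form: two-arm N x longitudinal deflation factor
    x SMART design inflation factor. Per the abstract, the inflation factor
    typically depends on the anticipated first-stage response probability;
    here both factors are simply supplied as hypothetical numbers."""
    return rct_two_arm_n(delta, sigma, alpha, power) * deflation * inflation

# Example with placeholder values: standardized effect 0.3, deflation 0.8
# from the longitudinal analysis, inflation 2.0 from the SMART design.
n_total = smart_total_n(delta=0.3, sigma=1.0, deflation=0.8, inflation=2.0)
print(round(n_total))
```

The practical appeal of the product form is that each factor can be reasoned about separately when eliciting design inputs from investigators.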
A dynamic treatment regimen (DTR) is a pre-specified sequence of decision rules which maps baseline or time-varying measurements on an individual to a recommended intervention or set of interventions. Sequential multiple assignment randomized trials (SMARTs) represent an important data collection tool for informing the construction of effective DTRs. A common primary aim in a SMART is the marginal mean comparison between two or more of the DTRs embedded in the trial. This manuscript develops a mixed effects modeling and estimation approach for these primary aim comparisons based on a continuous, longitudinal outcome. The method is illustrated using data from a SMART in autism research.
Sequential Multiple Assignment Randomized Trials (SMARTs) are considered the gold standard for estimation and evaluation of treatment regimes. SMARTs are typically sized to ensure sufficient power for a simple comparison, e.g., the comparison of two fixed treatment sequences. Estimation of an optimal treatment regime is conducted as part of a secondary and hypothesis-generating analysis, with formal evaluation of the estimated optimal regime deferred to a follow-up trial. However, running a follow-up trial to evaluate an estimated optimal treatment regime is costly and time-consuming; furthermore, the estimated optimal regime that is to be evaluated in such a follow-up trial may be far from optimal if the original trial was underpowered for estimation of an optimal regime. We derive sample size procedures for a SMART that ensure: (i) sufficient power for comparing the optimal treatment regime with standard of care; and (ii) the estimated optimal regime is within a given tolerance of the true optimal regime with high probability. We establish asymptotic validity of the proposed procedures and demonstrate their finite sample performance in a series of simulation experiments.
We propose BaySize, a sample size calculator for phase I clinical trials using Bayesian models. BaySize applies the concept of effect size in dose finding, assuming the maximum tolerated dose (MTD) is defined based on an equivalence interval. Leveraging a decision framework that involves composite hypotheses, BaySize utilizes two prior distributions, a fitting prior (for model fitting) and a sampling prior (for data generation), to conduct sample size calculation at a desired level of statistical power. Look-up tables are generated to facilitate practical applications. To our knowledge, BaySize is the first sample size tool that can be applied to a broad range of phase I trial designs.