
Sample size considerations for comparing dynamic treatment regimens in a sequential multiple-assignment randomized trial with a continuous longitudinal outcome

 Added by Nicholas Seewald
Publication date: 2018
Language: English





Clinicians and researchers alike are increasingly interested in how best to personalize interventions. A dynamic treatment regimen (DTR) is a sequence of pre-specified decision rules which can be used to guide the delivery of a sequence of treatments or interventions that are tailored to the changing needs of the individual. The sequential multiple-assignment randomized trial (SMART) is a research tool which allows for the construction of effective DTRs. We derive easy-to-use formulae for computing the total sample size for three common two-stage SMART designs in which the primary aim is to compare mean end-of-study outcomes for two embedded DTRs which recommend different first-stage treatments. The formulae are derived in the context of a regression model which leverages information from a longitudinal outcome collected over the entire study. We show that the sample size formula for a SMART can be written as the product of the sample size formula for a standard two-arm randomized trial, a deflation factor that accounts for the increased statistical efficiency resulting from a longitudinal analysis, and an inflation factor that accounts for the design of a SMART. The SMART design inflation factor is typically a function of the anticipated probability of response to first-stage treatment. We review modeling and estimation for DTR effect analyses using a longitudinal outcome from a SMART, as well as the estimation of standard errors. We also present estimators for the covariance matrix for a variety of common working correlation structures. Methods are motivated using the ENGAGE study, a SMART aimed at developing a DTR for increasing motivation to attend treatments among alcohol- and cocaine-dependent patients.
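To make the product form concrete, here is a minimal Python sketch: a standard two-arm sample size, scaled by a longitudinal deflation factor and a SMART design inflation factor. The specific factors plugged in below (1 - rho^2 for the longitudinal deflation, 2 - r for the inflation) are illustrative placeholders, not the paper's derived quantities; the correct factors depend on the working correlation structure and on which of the three two-stage designs is used.

```python
from math import ceil
from scipy.stats import norm

def smart_total_n(delta, alpha=0.05, power=0.80, deflation=1.0, inflation=1.0):
    """Total N = (standard two-arm RCT N) x deflation x inflation.

    delta     : standardized difference in end-of-study means.
    deflation : <= 1; efficiency gain from the longitudinal analysis
                (e.g., 1 - rho**2 under a baseline-adjusted working model;
                an illustrative choice, not the paper's exact factor).
    inflation : >= 1; SMART design effect, typically a function of the
                probability of response to first-stage treatment.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_rct = 4 * z**2 / delta**2  # total N for a standard two-arm trial
    return ceil(n_rct * deflation * inflation)

# Hypothetical inputs: delta = 0.3, within-person correlation rho = 0.5,
# and a placeholder inflation of 2 - r with response probability r = 0.4.
rho, r = 0.5, 0.4
print(smart_total_n(0.3, deflation=1 - rho**2, inflation=2 - r))
```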



Related research

A dynamic treatment regimen (DTR) is a pre-specified sequence of decision rules which maps baseline or time-varying measurements on an individual to a recommended intervention or set of interventions. Sequential multiple assignment randomized trials (SMARTs) represent an important data collection tool for informing the construction of effective DTRs. A common primary aim in a SMART is the marginal mean comparison between two or more of the DTRs embedded in the trial. This manuscript develops a mixed effects modeling and estimation approach for these primary aim comparisons based on a continuous, longitudinal outcome. The method is illustrated using data from a SMART in autism research.
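As a rough illustration of a mixed effects analysis of a continuous longitudinal outcome, the sketch below fits a random-intercept model to simulated data with a hypothetical two-level DTR indicator. This is a simplified stand-in, not the manuscript's estimator, which must additionally handle the weighting and replication induced by the DTRs embedded in a SMART.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, T = 100, 4
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), T),
    "time": np.tile(np.arange(T), n),
    "dtr": np.repeat(rng.integers(0, 2, size=n), T),  # hypothetical DTR indicator
})
b = rng.normal(0.0, 1.0, size=n)  # random intercepts
df["y"] = (1.0 + 0.5 * df["time"] + 0.3 * df["dtr"] * df["time"]
           + b[df["id"]] + rng.normal(0.0, 1.0, size=len(df)))

# Random-intercept working model; the dtr:time coefficient carries the
# between-group difference in end-of-study means.
fit = smf.mixedlm("y ~ time + dtr:time", df, groups=df["id"]).fit()
print(fit.summary())
```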
In many health domains, such as substance use, outcomes are often counts with an excessive number of zeros (EZ): count data in which zeros occur at a rate substantially higher than expected under a standard count distribution (e.g., Poisson). However, an important gap exists in sample size estimation methodology for planning sequential multiple assignment randomized trials (SMARTs) that compare dynamic treatment regimens (DTRs) using longitudinal count data. DTRs, also known as treatment algorithms or adaptive interventions, mimic the individualized and evolving nature of patient care through the specification of decision rules guiding the type, timing, modality of delivery, and dosage of treatments to address the unique and changing needs of individuals. To close this gap, we develop a Monte Carlo-based approach to sample size estimation. A SMART for engaging alcohol- and cocaine-dependent patients in treatment is used as motivation.
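A stripped-down version of the Monte Carlo idea, assuming a simple two-arm comparison of zero-inflated Poisson outcomes rather than the full SMART/longitudinal setting: simulate trials at candidate sample sizes, estimate power empirically, and take the smallest size that meets the target. All parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

def zip_sample(n, p_zero, lam):
    """Zero-inflated Poisson: structural zero w.p. p_zero, else Poisson(lam)."""
    return np.where(rng.random(n) < p_zero, 0, rng.poisson(lam, n))

def power(n_per_arm, reps=2000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        y0 = zip_sample(n_per_arm, p_zero=0.4, lam=2.0)  # hypothetical control DTR
        y1 = zip_sample(n_per_arm, p_zero=0.4, lam=2.8)  # hypothetical better DTR
        if mannwhitneyu(y0, y1).pvalue < alpha:
            hits += 1
    return hits / reps

for n in (40, 60, 80, 100):
    print(n, round(power(n), 3))  # pick the smallest n reaching target power
```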
Yan-Cheng Chao (2020)
A small n, sequential, multiple assignment, randomized trial (snSMART) is a small-sample, two-stage design in which participants receive up to two treatments sequentially, with the second treatment depending on response to the first. The treatment effect of interest in an snSMART is the first-stage response rate, but outcomes from both stages can be used to obtain more information from a small sample. A novel way to incorporate the outcomes from both stages applies power prior models, in which first-stage outcomes from an snSMART are regarded as the primary data and second-stage outcomes are regarded as supplemental. We apply existing power prior models to snSMART data, and we also develop new extensions of power prior models. All methods are compared to each other and to the Bayesian joint stage model (BJSM) via simulation studies. Comparing the biases and the efficiency of the response rate estimates among all proposed power prior methods, we suggest applying Fisher's exact test or the Bhattacharyya overlap measure to estimate the treatment effect in an snSMART; both perform mostly as well as or better than the BJSM. We describe the situations where each of these suggested approaches is preferred.
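The core power prior mechanics can be shown with conjugate beta-binomial updating: the supplemental (second-stage) likelihood is raised to a discount a0 in [0, 1] before being combined with the primary (first-stage) data. This sketch fixes a0 by hand; the suggested approaches above instead tie the discount to the agreement between stages (e.g., via Fisher's exact test or the Bhattacharyya overlap). Counts are hypothetical.

```python
from scipy.stats import beta

x1, n1 = 6, 15  # first-stage responders / participants (primary data)
x2, n2 = 9, 20  # second-stage responders / participants (supplemental data)
a0 = 0.5        # fixed power-prior discount in [0, 1]; illustrative value

# Beta(1, 1) initial prior; the supplemental likelihood enters with exponent a0,
# so its successes and failures are down-weighted by a0 in the posterior.
post = beta(1 + x1 + a0 * x2, 1 + (n1 - x1) + a0 * (n2 - x2))
print(post.mean(), post.interval(0.95))  # discounted-pooled response rate
```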
Adaptive interventions (AIs) are increasingly popular in the medical and behavioral sciences. An AI is a sequence of individualized intervention options that specify for whom and under what conditions different intervention options should be offered, in order to address the changing needs of individuals as they progress over time. The sequential, multiple assignment, randomized trial (SMART) is a novel trial design developed to aid in empirically constructing effective AIs. The sequential randomizations in a SMART often yield multiple AIs that are embedded in the trial by design. Many SMARTs are motivated by scientific questions pertaining to the comparison of such embedded AIs. Existing data analytic methods and sample size planning resources for SMARTs are suitable for superiority testing, namely for testing whether one embedded AI yields better primary outcomes on average than another. This represents a major scientific gap, since AIs are often motivated by the need to deliver support or care in a less costly or less burdensome manner while still yielding benefits that are equivalent or non-inferior to those produced by a more costly or burdensome standard of care. Here, we develop data analytic methods and sample size formulas for SMART studies aiming to test the non-inferiority or equivalence of one AI relative to another. Sample size and power considerations are discussed with supporting simulations, and online sample size planning resources are provided. For illustration, we use an example from a SMART study aiming to develop an AI for promoting weight loss among overweight/obese adults.
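For intuition, a standard (non-SMART) per-arm sample size for a one-sided non-inferiority test of two means looks like the sketch below; SMART-specific formulas additionally fold in the design effect discussed earlier. Inputs here are hypothetical.

```python
from math import ceil
from scipy.stats import norm

def n_noninferiority(sigma, margin, true_diff=0.0, alpha=0.025, power=0.80):
    """Per-arm n to show the mean difference exceeds -margin (one-sided test)."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return ceil(2 * (sigma * z / (margin - true_diff)) ** 2)

print(n_noninferiority(sigma=1.0, margin=0.3))  # hypothetical inputs
```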
Knockoffs provide a general framework for controlling the false discovery rate when performing variable selection. Much of the knockoffs literature focuses on theoretical challenges, and we recognize a need for bringing some of the current ideas into practice. In this paper we propose a sequential algorithm for generating knockoffs when the underlying data consist of both continuous and categorical (factor) variables. Further, we present a heuristic multiple-knockoffs approach that offers a practical assessment of how robust the knockoff selection process is for a given data set. We conduct extensive simulations to validate the performance of the proposed methodology. Finally, we demonstrate the utility of the methods on a large clinical data pool of more than 2,000 patients with psoriatic arthritis evaluated in 4 clinical trials with an IL-17A inhibitor, secukinumab (Cosentyx), where we determine prognostic factors of a well-established clinical outcome. The analyses presented in this paper could apply to a wide range of commonly encountered data sets in medical practice and other fields where variable selection is of particular interest.
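The selection step common to knockoff methods can be written compactly: given signed feature statistics W_j (large positive values favor selecting variable j), the knockoff+ threshold is the smallest t at which the estimated false discovery proportion drops below the target q. The W values below are toy numbers, not from the psoriatic arthritis analysis.

```python
import numpy as np

def knockoff_threshold(W, q=0.10):
    """Knockoff+ threshold: smallest t with (1 + #{W <= -t}) / #{W >= t} <= q."""
    ts = np.sort(np.abs(W[W != 0]))
    for t in ts:
        fdp = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp <= q:
            return t
    return np.inf  # nothing selected at this q

W = np.array([3.1, -0.4, 2.2, 0.9, 1.5, 4.0, 2.8, -0.2])  # toy W statistics
t = knockoff_threshold(W, q=0.20)
print("selected:", np.where(W >= t)[0])  # indices of selected variables
```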