
Non-Inferiority and Equivalence Tests in A Sequential Multiple-Assignment Randomized Trial (SMART)

Published by: Palash Ghosh
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English





Adaptive interventions (AIs) are becoming increasingly popular in medical and behavioral sciences. An AI is a sequence of individualized intervention options that specify for whom and under what conditions different intervention options should be offered, in order to address the changing needs of individuals as they progress over time. The sequential, multiple assignment, randomized trial (SMART) is a novel trial design that was developed to aid in empirically constructing effective AIs. The sequential randomizations in a SMART often yield multiple AIs that are embedded in the trial by design. Many SMARTs are motivated by scientific questions pertaining to the comparison of such embedded AIs. Existing data analytic methods and sample size planning resources for SMARTs are suitable for superiority testing, namely for testing whether one embedded AI yields better primary outcomes on average than another. This represents a major scientific gap since AIs are often motivated by the need to deliver support/care in a less costly or less burdensome manner, while still yielding benefits that are equivalent or non-inferior to those produced by a more costly/burdensome standard of care. Here, we develop data analytic methods and sample size formulas for SMART studies aiming to test the non-inferiority or equivalence of one AI relative to another. Sample size and power considerations are discussed with supporting simulations, and online sample size planning resources are provided. For illustration, we use an example from a SMART study aiming to develop an AI for promoting weight loss among overweight/obese adults.
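To make the testing problem concrete, here is a minimal Python sketch of the non-inferiority and equivalence (TOST) decision rules for comparing the mean primary outcomes of two AIs, assuming higher outcomes are better. It is not the paper's estimator for embedded AIs, which must account for the SMART's sequential randomization; mu_hat_new, mu_hat_std, se_diff, and margin are illustrative placeholders.

from scipy.stats import norm

def noninferiority_test(mu_hat_new, mu_hat_std, se_diff, margin, alpha=0.05):
    # One-sided z-test of H0: mu_new - mu_std <= -margin (new AI is inferior).
    z = (mu_hat_new - mu_hat_std + margin) / se_diff
    return z > norm.ppf(1 - alpha)            # True -> declare non-inferiority

def equivalence_test(mu_hat_new, mu_hat_std, se_diff, margin, alpha=0.05):
    # Two one-sided tests (TOST) of H0: |mu_new - mu_std| >= margin.
    crit = norm.ppf(1 - alpha)
    z_lower = (mu_hat_new - mu_hat_std + margin) / se_diff
    z_upper = (mu_hat_new - mu_hat_std - margin) / se_diff
    return (z_lower > crit) and (z_upper < -crit)   # True -> declare equivalence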




Read also

Yan-Cheng Chao (2020)
A small n, sequential, multiple assignment, randomized trial (snSMART) is a small sample, two-stage design where participants receive up to two treatments sequentially, but the second treatment depends on response to the first treatment. The treatment effect of interest in an snSMART is the first-stage response rate, but outcomes from both stages can be used to obtain more information from a small sample. A novel way to incorporate the outcomes from both stages applies power prior models, in which first stage outcomes from an snSMART are regarded as the primary data and second stage outcomes are regarded as supplemental. We apply existing power prior models to snSMART data, and we also develop new extensions of power prior models. All methods are compared to each other and to the Bayesian joint stage model (BJSM) via simulation studies. Comparing the biases and the efficiency of the response rate estimates among all proposed power prior methods, we suggest applying Fisher's exact test or the Bhattacharyya overlap measure to estimate the treatment effect in an snSMART; both have performance mostly as good as or better than the BJSM. We describe the situations where each of these suggested approaches is preferred.
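As a rough sketch of the underlying power prior idea (not the specific snSMART estimators or the BJSM developed in the paper), a binomial response rate with a conjugate Beta prior has a closed-form posterior when the supplemental stage-2 outcomes are down-weighted by a power a0 in [0, 1]. In the paper a0 is chosen data-adaptively (e.g., from the agreement between stages); here it is a fixed, assumed value and all names are illustrative.

from scipy.stats import beta

def power_prior_posterior(y1, n1, y2, n2, a0, a=1.0, b=1.0):
    # Stage-1 (primary) data enter at full weight; stage-2 (supplemental)
    # data are discounted by the power parameter a0.
    post_a = a + y1 + a0 * y2
    post_b = b + (n1 - y1) + a0 * (n2 - y2)
    return beta(post_a, post_b)

# Hypothetical example: 6/15 stage-1 responders, 8/12 stage-2 responders, a0 = 0.5.
post = power_prior_posterior(y1=6, n1=15, y2=8, n2=12, a0=0.5)
print(post.mean())   # posterior mean of the first-stage response rate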
Clinicians and researchers alike are increasingly interested in how best to personalize interventions. A dynamic treatment regimen (DTR) is a sequence of pre-specified decision rules which can be used to guide the delivery of a sequence of treatments or interventions that are tailored to the changing needs of the individual. The sequential multiple-assignment randomized trial (SMART) is a research tool which allows for the construction of effective DTRs. We derive easy-to-use formulae for computing the total sample size for three common two-stage SMART designs in which the primary aim is to compare mean end-of-study outcomes for two embedded DTRs which recommend different first-stage treatments. The formulae are derived in the context of a regression model which leverages information from a longitudinal outcome collected over the entire study. We show that the sample size formula for a SMART can be written as the product of the sample size formula for a standard two-arm randomized trial, a deflation factor that accounts for the increased statistical efficiency resulting from a longitudinal analysis, and an inflation factor that accounts for the design of a SMART. The SMART design inflation factor is typically a function of the anticipated probability of response to first-stage treatment. We review modeling and estimation for DTR effect analyses using a longitudinal outcome from a SMART, as well as the estimation of standard errors. We also present estimators for the covariance matrix for a variety of common working correlation structures. Methods are motivated using the ENGAGE study, a SMART aimed at developing a DTR for increasing motivation to attend treatments among alcohol- and cocaine-dependent patients.
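The product structure described above can be sketched as follows; the two-arm formula is the standard one for comparing two means, but the deflation and inflation factors are left as placeholders because the actual factors depend on the working correlation structure, the SMART design, and the anticipated response probability (see the paper's formulae).

from scipy.stats import norm

def smart_total_n(delta, sigma, alpha=0.05, power=0.80,
                  deflation=1.0, inflation=1.0):
    # Total sample size = (standard two-arm RCT N) x deflation x inflation.
    # delta: targeted difference in mean end-of-study outcomes between the DTRs
    # sigma: standard deviation of the outcome
    # deflation: efficiency gain from the longitudinal analysis (placeholder)
    # inflation: SMART design effect, typically a function of the probability
    #            of response to first-stage treatment (placeholder)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_rct = 4 * (sigma ** 2) * (z ** 2) / (delta ** 2)   # 1:1 two-arm total N
    return n_rct * deflation * inflation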
A utility-based Bayesian population finding (BaPoFi) method was proposed by Morita and Muller (2017, Biometrics, 1355-1365) to analyze data from a randomized clinical trial with the aim of identifying good predictive baseline covariates for optimizing the target population for a future study. The approach casts the population finding process as a formal decision problem together with a flexible probability model using a random forest to define a regression mean function. BaPoFi is constructed to handle a single continuous or binary outcome variable. In this paper, we develop BaPoFi-TTE as an extension of the earlier approach for clinically important cases of time-to-event (TTE) data with censoring, also accounting for a toxicity outcome. We model the association of TTE data with baseline covariates using a semi-parametric failure time model with a Polya tree prior for an unknown error term and a random forest for a flexible regression mean function. We define a utility function that addresses a trade-off between efficacy and toxicity as one of the important clinical considerations for population finding. We examine the operating characteristics of the proposed method in extensive simulation studies. For illustration, we apply the proposed method to data from a randomized oncology clinical trial. Concerns in a preliminary analysis of the same data based on a parametric model motivated the proposed more general approach.
Just-in-time adaptive interventions (JITAIs) are time-varying adaptive interventions that use frequent opportunities for the intervention to be adapted, such as weekly, daily, or even many times a day. The micro-randomized trial (MRT) has emerged as a design for informing the construction of JITAIs. MRTs can be used to address research questions about whether and under what circumstances JITAI components are effective, with the ultimate objective of developing effective and efficient JITAIs. The purpose of this article is to clarify why, when, and how to use MRTs; to highlight elements that must be considered when designing and implementing an MRT; and to review primary and secondary analysis methods for MRTs. We briefly review key elements of JITAIs and discuss a variety of considerations that go into planning and designing an MRT. We provide a definition of causal excursion effects suitable for use in primary and secondary analyses of MRT data to inform JITAI development. We review the weighted and centered least-squares (WCLS) estimator, which provides consistent causal excursion effect estimators from MRT data. We describe how the WCLS estimator, along with associated test statistics, can be obtained using standard statistical software such as R (R Core Team, 2019). Throughout, we illustrate the MRT design and analyses using the HeartSteps MRT, which was designed to develop a JITAI for increasing physical activity among sedentary individuals. We supplement the HeartSteps MRT with two other MRTs, SARA and BariFit, each of which highlights different research questions that can be addressed using the MRT and experimental design considerations that might arise.
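A minimal numpy sketch of the weighting-and-centering idea behind WCLS, assuming a binary treatment with known (possibly time-varying) randomization probabilities; it omits the robust standard errors and small-sample corrections used in practice, and all argument names are illustrative.

import numpy as np

def wcls_fit(y, A, p_rand, controls, moderators, p_tilde=0.5):
    # y: proximal outcomes; A: 0/1 treatment indicators
    # p_rand: randomization probabilities P(A=1 | history) actually used
    # controls: control-variable design matrix (include an intercept column)
    # moderators: effect-model design matrix (include a ones column for the
    #             marginal causal excursion effect)
    num = np.where(A == 1, p_tilde, 1.0 - p_tilde)
    den = np.where(A == 1, p_rand, 1.0 - p_rand)
    w = num / den                                    # stabilizing weights
    X = np.hstack([controls, (A - p_tilde)[:, None] * moderators])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return coef[controls.shape[1]:]                  # causal excursion effect(s)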
L. McCabe (2019)
Background: Achieving hepatitis C elimination is hampered by the costs of treatment and the need to treat hard-to-reach populations. Treatment access could be widened by shortening treatment, but limited research means it is unclear which strategies could achieve sufficiently high cure rates to be acceptable. We present the statistical aspects of a multi-arm trial designed to test multiple strategies simultaneously, with a monitoring mechanism to detect and quickly stop those with unacceptably low cure rates. Methods: The VIETNARMS trial will factorially randomise patients across three randomisations. We will use Bayesian monitoring at interim analyses to detect and stop recruitment into unsuccessful strategies, defined as a >0.95 posterior probability of the true cure rate being <90%. Here, we tested the operating characteristics of the stopping guideline, planned the timing of the interim analyses and explored power at the final analysis. Results: A beta(4.5, 0.5) prior for the true cure rate produces <0.05 probability of incorrectly stopping a group with true cure rate >90%. Groups with very low cure rates (<60%) are very likely (>0.9 probability) to stop after ~25% of patients are recruited. Groups with moderately low cure rates (80%) are likely to stop (0.7 probability) before the end of recruitment. Interim analyses 7, 10, 13 and 18 months after recruitment commences provide good probabilities of stopping inferior groups. For an overall true cure rate of 95%, power is >90% to detect non-inferiority in the regimen and strategy comparisons using 5% and 10% margins respectively, regardless of the control cure rate, and to detect a 5% absolute difference in the ribavirin comparison. Conclusions: The operating characteristics of the stopping guideline are appropriate and interim analyses can be timed to detect failing groups at various stages.
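The stopping guideline has a simple conjugate form; a minimal sketch, assuming binomial cure counts and the stated Beta(4.5, 0.5) prior, 90% target cure rate and 0.95 posterior threshold, could look like this (the interim counts are hypothetical):

from scipy.stats import beta

def stop_strategy(n_treated, n_cured, a=4.5, b=0.5, target=0.90, threshold=0.95):
    # Stop recruitment if the posterior probability that the true cure rate
    # is below the target exceeds the threshold.
    posterior = beta(a + n_cured, b + (n_treated - n_cured))
    p_below_target = posterior.cdf(target)
    return p_below_target > threshold, p_below_target

print(stop_strategy(n_treated=40, n_cured=33))   # e.g., one interim look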