Non-Inferiority and Equivalence Tests in a Sequential Multiple-Assignment Randomized Trial (SMART)


Abstract

Adaptive interventions (AIs) are becoming increasingly popular in the medical and behavioral sciences. An AI is a sequence of individualized intervention options that specifies for whom and under what conditions different intervention options should be offered, in order to address the changing needs of individuals as they progress over time. The sequential, multiple assignment, randomized trial (SMART) is a novel trial design developed to aid in empirically constructing effective AIs. The sequential randomizations in a SMART often yield multiple AIs that are embedded in the trial by design, and many SMARTs are motivated by scientific questions pertaining to the comparison of such embedded AIs. Existing data analytic methods and sample size planning resources for SMARTs are suitable for superiority testing, namely for testing whether one embedded AI yields better primary outcomes on average than another. This represents a major scientific gap, since AIs are often motivated by the need to deliver support/care in a less costly or less burdensome manner while still yielding benefits that are equivalent or non-inferior to those produced by a more costly/burdensome standard of care. Here, we develop data analytic methods and sample size formulas for SMART studies aiming to test the non-inferiority or equivalence of one AI relative to another. Sample size and power considerations are discussed with supporting simulations, and online sample size planning resources are provided. For illustration, we use an example from a SMART study aiming to develop an AI for promoting weight loss among overweight/obese adults.
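To fix ideas, a generic formulation of these hypotheses is sketched below (the notation is assumed here for illustration and need not match the paper's own). Let $\mu_{d}$ and $\mu_{d'}$ denote the mean primary outcomes under a standard embedded AI $d$ and a less costly/burdensome alternative $d'$, and let $\delta > 0$ be a pre-specified margin, with larger outcomes indicating greater benefit. A non-inferiority test then contrasts

$$H_0: \; \mu_{d'} - \mu_{d} \le -\delta \quad \text{versus} \quad H_1: \; \mu_{d'} - \mu_{d} > -\delta,$$

whereas an equivalence test requires rejecting both one-sided null hypotheses $H_{01}: \mu_{d'} - \mu_{d} \le -\delta$ and $H_{02}: \mu_{d'} - \mu_{d} \ge \delta$, thereby supporting the conclusion that $|\mu_{d'} - \mu_{d}| < \delta$.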
