Background: Achieving hepatitis C elimination is hampered by the costs of treatment and the need to treat hard-to-reach populations. Treatment access could be widened by shortening treatment, but limited research means it is unclear which strategies could achieve sufficiently high cure rates to be acceptable. We present the statistical aspects of a multi-arm trial designed to test multiple strategies simultaneously, with a monitoring mechanism to detect and quickly stop those with unacceptably low cure rates. Methods: The VIETNARMS trial will randomise patients factorially across three randomisations. We will use Bayesian monitoring at interim analyses to detect and stop recruitment into unsuccessful strategies, defined as a >0.95 posterior probability that the true cure rate is <90%. Here, we tested the operating characteristics of the stopping guideline, planned the timing of the interim analyses and explored power at the final analysis. Results: A beta(4.5, 0.5) prior for the true cure rate gives a <0.05 probability of incorrectly stopping a group with a true cure rate >90%. Groups with very low cure rates (<60%) are very likely (>0.9 probability) to stop after ~25% of patients are recruited. Groups with moderately low cure rates (80%) are likely (0.7 probability) to stop before the end of recruitment. Interim analyses 7, 10, 13 and 18 months after recruitment commences provide good probabilities of stopping inferior groups. For an overall true cure rate of 95%, power is >90% to detect non-inferiority in the regimen and strategy comparisons using 5% and 10% margins respectively, regardless of the control cure rate, and to detect a 5% absolute difference in the ribavirin comparison. Conclusions: The operating characteristics of the stopping guideline are appropriate, and interim analyses can be timed to detect failing groups at various stages.
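A minimal sketch of how such a Bayesian stopping check could be computed, assuming the conjugate beta-binomial model implied by the abstract (the beta(4.5, 0.5) prior and the 90%/0.95 thresholds are from the abstract; the interim counts below are hypothetical):

```python
from scipy.stats import beta

# Conjugate beta-binomial update: beta(4.5, 0.5) prior on the true cure rate.
PRIOR_A, PRIOR_B = 4.5, 0.5

def stop_recruitment(n_cured, n_assessed, threshold=0.90, prob_cutoff=0.95):
    """Return (stop, prob), where stop is True if the posterior
    probability that the true cure rate is below `threshold`
    exceeds `prob_cutoff`."""
    post_a = PRIOR_A + n_cured
    post_b = PRIOR_B + (n_assessed - n_cured)
    prob_below = beta.cdf(threshold, post_a, post_b)
    return prob_below > prob_cutoff, prob_below

# Hypothetical interim data: 40 of 50 assessed patients cured in one group.
stop, p = stop_recruitment(40, 50)
print(f"P(cure rate < 90%) = {p:.3f}; stop recruitment = {stop}")
```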
While difference-in-differences (DID) was originally developed with one pre-treatment and one post-treatment period, data from additional pre-treatment periods are often available. How, and under what conditions, can researchers use such multiple pre-treatment periods to improve the DID design? We first use the potential outcomes framework to clarify three benefits of multiple pre-treatment periods: (1) assessing the parallel trends assumption, (2) improving estimation accuracy, and (3) allowing for a more flexible parallel trends assumption. We then propose a new estimator, the double DID, which combines all three benefits through the generalized method of moments and contains the two-way fixed effects regression as a special case. In a wide range of applications where several pre-treatment periods are available, the double DID improves upon the standard DID in terms of both identification and estimation accuracy. We also generalize the double DID to the staggered adoption design, where different units can receive the treatment in different time periods. We illustrate the proposed method with two empirical applications, covering both the basic DID and staggered adoption designs. We offer an open-source R package that implements the proposed methodologies.
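A minimal numerical sketch of why an extra pre-treatment period helps, using hypothetical group-by-period means (the full double DID combines these moment conditions by GMM, as in the authors' R package, rather than picking one estimator):

```python
import numpy as np

# Group-by-period mean outcomes (hypothetical values):
# columns are periods t = -1 (pre), t = 0 (pre), t = 1 (post).
y_treat = np.array([10.0, 11.0, 14.5])
y_ctrl  = np.array([ 8.0,  9.0, 10.0])

# Standard DID: change from t = 0 to t = 1, treated minus control.
did = (y_treat[2] - y_treat[1]) - (y_ctrl[2] - y_ctrl[1])

# The extra pre-period yields a placebo DID over t = -1 to t = 0,
# which should be near zero if parallel trends holds (benefit 1).
pretrend = (y_treat[1] - y_treat[0]) - (y_ctrl[1] - y_ctrl[0])

# Sequential DID ("DID of DIDs"): differences out a linear violation
# of parallel trends, i.e. the more flexible assumption (benefit 3).
sdid = did - pretrend

print(f"DID = {did}, placebo pre-trend = {pretrend}, sequential DID = {sdid}")
```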
Equipment sharing among people who inject drugs (PWID) is a key risk factor for infection with hepatitis C virus (HCV). Both the effectiveness and the cost-effectiveness of interventions aimed at reducing HCV transmission in this population (such as opioid substitution therapy, needle exchange programs or improved treatment) are difficult to evaluate using field surveys: ethical issues and complicated access to the PWID population make it difficult to gather epidemiological data. In this context, mathematical modelling of HCV transmission is a useful alternative for comparing the cost and effectiveness of various interventions. Several models have been developed in the past few years; they are often based on strong hypotheses concerning the population structure. This review presents compartmental and individual-based models in order to underline their strengths and limits in the context of HCV infection among PWID. The final section discusses the main results of the reviewed papers.
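As a rough illustration of the compartmental approach discussed in the review, a minimal susceptible/infected model of HCV transmission among PWID might look like the sketch below; all parameter values and compartment choices are hypothetical and only illustrate the general form, not any specific reviewed model:

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical per-year rates, for illustration only.
beta_share = 0.20   # transmission rate via shared injecting equipment
treat_rate = 0.05   # treatment-induced clearance (return to susceptible)
exit_rate  = 0.03   # cessation/death, balanced by new injectors entering

def hcv_model(y, t):
    S, I = y
    N = S + I
    new_inf = beta_share * S * I / N          # frequency-dependent mixing
    dS = exit_rate * N - new_inf + treat_rate * I - exit_rate * S
    dI = new_inf - treat_rate * I - exit_rate * I
    return [dS, dI]

t = np.linspace(0, 50, 501)
sol = odeint(hcv_model, y0=[9000.0, 1000.0], t=t)
print("HCV prevalence after 50 years:", sol[-1, 1] / sol[-1].sum())
```

Raising `treat_rate` or lowering `beta_share` in such a model mimics improved treatment or needle exchange, which is how compartmental models are used to compare interventions.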
Adaptive interventions (AIs) are becoming increasingly popular in the medical and behavioral sciences. An AI is a sequence of individualized intervention options that specifies for whom and under what conditions different intervention options should be offered, in order to address the changing needs of individuals as they progress over time. The sequential, multiple assignment, randomized trial (SMART) is a novel trial design that was developed to aid in empirically constructing effective AIs. The sequential randomizations in a SMART often yield multiple AIs that are embedded in the trial by design. Many SMARTs are motivated by scientific questions pertaining to the comparison of such embedded AIs. Existing data analytic methods and sample size planning resources for SMARTs are suitable for superiority testing, namely for testing whether one embedded AI yields better primary outcomes on average than another. This represents a major scientific gap, since AIs are often motivated by the need to deliver support/care in a less costly or less burdensome manner, while still yielding benefits that are equivalent or non-inferior to those produced by a more costly/burdensome standard of care. Here, we develop data analytic methods and sample size formulas for SMART studies aiming to test the non-inferiority or equivalence of one AI relative to another. Sample size and power considerations are discussed with supporting simulations, and online sample size planning resources are provided. For illustration, we use an example from a SMART study aiming to develop an AI for promoting weight loss among overweight/obese adults.
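A minimal sketch of the kind of calculation involved, using the textbook per-arm sample size formula for a one-sided non-inferiority test of a mean difference (all numbers hypothetical; the paper's formulas additionally account for the weighting induced by a SMART's sequential randomizations, which this sketch ignores):

```python
import math
from scipy.stats import norm

def ni_sample_size(sigma, margin, true_diff=0.0, alpha=0.05, power=0.90):
    """Per-arm n for testing H0: mu_new - mu_std <= -margin
    against H1: mu_new - mu_std > -margin."""
    z_a = norm.ppf(1 - alpha)
    z_b = norm.ppf(power)
    n = 2 * (sigma * (z_a + z_b) / (margin + true_diff)) ** 2
    return math.ceil(n)

# Hypothetical: outcome SD 8, non-inferiority margin 3, truly equal AIs.
print("per-arm n:", ni_sample_size(sigma=8.0, margin=3.0))
```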
Stroke is a major cause of mortality and long-term disability in the world. Predictive outcome models in stroke are valuable for personalized treatment, rehabilitation planning and controlled clinical trials. In this paper we design a new model to predict outcome in the short term, the putative therapeutic window for several treatments. Our regression-based model has a parametric form designed to address challenges common in medical datasets, such as highly correlated variables and class imbalance. Empirically, our model outperforms the best-known previous models in predicting short-term outcomes and in inferring the most effective treatments that improve outcome.
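The abstract does not specify the model's exact parametric form; as a hedged illustration of one standard way to handle correlated predictors and class imbalance in a regression-based outcome model, one could use a penalized, class-weighted logistic regression (synthetic data stands in for a stroke dataset):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic, imbalanced data with redundant (correlated) features.
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           n_redundant=4, weights=[0.85], random_state=0)

# An L2 penalty stabilises coefficients of correlated variables;
# class_weight="balanced" reweights the minority outcome class.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l2", C=0.5, class_weight="balanced"),
)
model.fit(X, y)
print("mean predicted risk:", model.predict_proba(X)[:, 1].mean())
```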
Just-in-time adaptive interventions (JITAIs) are time-varying adaptive interventions that use frequent opportunities for the intervention to be adapted: weekly, daily, or even many times a day. The micro-randomized trial (MRT) has emerged as a design for informing the construction of JITAIs. MRTs can be used to address research questions about whether and under what circumstances JITAI components are effective, with the ultimate objective of developing effective and efficient JITAIs. The purpose of this article is to clarify why, when, and how to use MRTs; to highlight elements that must be considered when designing and implementing an MRT; and to review primary and secondary analysis methods for MRTs. We briefly review key elements of JITAIs and discuss a variety of considerations that go into planning and designing an MRT. We provide a definition of causal excursion effects suitable for use in primary and secondary analyses of MRT data to inform JITAI development. We review the weighted and centered least-squares (WCLS) estimator, which provides consistent causal excursion effect estimates from MRT data, and describe how the WCLS estimator, along with associated test statistics, can be obtained using standard statistical software such as R (R Core Team, 2019). Throughout, we illustrate the MRT design and analyses using the HeartSteps MRT, conducted to develop a JITAI for increasing physical activity among sedentary individuals. We supplement the HeartSteps MRT with two other MRTs, SARA and BariFit, each of which highlights different research questions that can be addressed using the MRT and experimental design considerations that might arise.
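A minimal sketch of the centering-and-weighting idea behind the WCLS estimator, assuming a single binary treatment with known time-varying randomization probabilities (all variable names and the simulated data are hypothetical; the published estimator and its R implementation additionally handle effect moderators and robust standard errors):

```python
import numpy as np
import statsmodels.api as sm

def wcls(y, a, p_t, X, p_tilde=0.5):
    """Weighted-and-centered least squares for a causal excursion effect.
    y: outcomes; a: binary treatment indicator; p_t: randomization
    probability at each decision point; X: control covariates;
    p_tilde: fixed reference probability used for centering and in
    the weight numerator."""
    w = np.where(a == 1, p_tilde / p_t, (1 - p_tilde) / (1 - p_t))
    a_centered = a - p_tilde                     # centering step
    design = np.column_stack([np.ones(len(y)), X, a_centered])
    fit = sm.WLS(y, design, weights=w).fit()
    return fit.params[-1]                        # centered-treatment coefficient

# Hypothetical simulated MRT data: 500 person-decision points.
rng = np.random.default_rng(0)
p_t = rng.uniform(0.3, 0.7, 500)                 # time-varying randomization
a = rng.binomial(1, p_t)                         # delivered treatment
X = rng.normal(size=(500, 2))                    # control covariates
y = 1.0 + X @ np.array([0.5, -0.3]) + 0.4 * a + rng.normal(size=500)
print("estimated excursion effect (truth 0.4):", wcls(y, a, p_t, X))
```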