Just-in-time adaptive interventions (JITAIs) are time-varying adaptive interventions that use frequent opportunities for the intervention to be adapted, such as weekly, daily, or even many times a day. The micro-randomized trial (MRT) has emerged as an experimental design for informing the construction of JITAIs. MRTs can be used to address research questions about whether and under what circumstances JITAI components are effective, with the ultimate objective of developing effective and efficient JITAIs. The purpose of this article is to clarify why, when, and how to use MRTs; to highlight elements that must be considered when designing and implementing an MRT; and to review primary and secondary analysis methods for MRTs. We briefly review key elements of JITAIs and discuss a variety of considerations that go into planning and designing an MRT. We provide a definition of causal excursion effects suitable for use in primary and secondary analyses of MRT data to inform JITAI development. We review the weighted and centered least-squares (WCLS) estimator, which provides consistent estimators of causal excursion effects from MRT data. We describe how the WCLS estimator, along with associated test statistics, can be obtained using standard statistical software such as R (R Core Team, 2019). Throughout, we illustrate the MRT design and analyses using the HeartSteps MRT, which was conducted to inform the development of a JITAI for increasing physical activity among sedentary individuals. We supplement the HeartSteps MRT with two other MRTs, SARA and BariFit, each of which highlights different research questions that can be addressed using the MRT design and experimental design considerations that might arise.
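To give a concrete sense of how such an analysis might be set up, the sketch below shows a WCLS-style fit using the geepack package in R. It is a minimal illustration of the weighting-and-centering idea, not the exact WCLS estimator or the authors' HeartSteps analysis code; the simulated data frame and all variable names (id, state, prob, action, outcome) are hypothetical placeholders.

```r
# Minimal sketch of a WCLS-style analysis with standard R tools.
# The data are simulated purely for illustration.
library(geepack)

set.seed(1)
n_id <- 40; n_dp <- 30                               # participants x decision points
dat <- data.frame(id    = rep(1:n_id, each = n_dp),
                  state = rnorm(n_id * n_dp))        # time-varying covariate
dat$prob    <- plogis(0.3 * dat$state)               # randomization probability
dat$action  <- rbinom(nrow(dat), 1, dat$prob)        # micro-randomized treatment
dat$outcome <- 0.5 * dat$state + 0.2 * dat$action + rnorm(nrow(dat))

ptilde     <- 0.5                                    # fixed reference probability
dat$w      <- ifelse(dat$action == 1,
                     ptilde / dat$prob, (1 - ptilde) / (1 - dat$prob))
dat$a_cent <- dat$action - ptilde                    # centered treatment indicator

# Weighted regression with an independence working correlation; the robust
# (sandwich) standard errors reported by geeglm cluster on participant id.
fit <- geeglm(outcome ~ state + a_cent + a_cent:state,
              id = id, weights = w, data = dat, corstr = "independence")
summary(fit)
```

In this sketch, the weights compare a fixed reference probability with the randomization probability actually used at each decision point, and centering the treatment indicator keeps the moderation terms interpretable; the full WCLS methodology includes a variance correction not reproduced here.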
Just-in-time adaptive interventions (JITAIs) are time-varying adaptive interventions that use frequent opportunities for the intervention to be adapted, such as weekly, daily, or even many times a day. This high intensity of adaptation is facilitated by the ability of digital technology to continuously collect information about an individual's current context and to deliver treatments adapted to this information. The micro-randomized trial (MRT) has emerged for use in informing the construction of JITAIs. MRTs operate in, and take advantage of, the rapidly time-varying digital intervention environment. MRTs can be used to address research questions about whether and under what circumstances particular components of a JITAI are effective, with the ultimate objective of developing effective and efficient components. The purpose of this article is to clarify why, when, and how to use MRTs; to highlight elements that must be considered when designing and implementing an MRT; and to discuss the possibilities this emerging optimization trial design offers for future research in the behavioral sciences, education, and other fields. We briefly review key elements of JITAIs and then describe three case studies of MRTs, each of which highlights research questions that can be addressed using the MRT and experimental design considerations that might arise. We also discuss a variety of considerations that go into planning and designing an MRT, using the case studies as examples.
The development of spatio-temporal crime prediction models, and to a lesser extent of measures of their accuracy and operational efficiency, has been an active area of research for almost two decades. Despite calls for rigorous and independent evaluations of model performance, such studies have been few and far between. In this paper, we argue that studies should focus not on finding the one predictive model or the one measure that is most appropriate at all times, but instead on careful consideration of the several factors that affect the choice of model and measure, so as to find the best measure and the best model for the problem at hand. Because each problem is unique, we argue that it is important to develop measures that empower practitioners to input the choices and preferences most appropriate for their application. We develop a new measure, the penalized predictive accuracy index (PPAI), which imparts such flexibility. We also propose the use of an expected utility function to combine multiple measures in a way that is appropriate for a given problem, so that models can be assessed against multiple criteria. We further propose the average logarithmic score (ALS), a measure that is appropriate for many crime models and quantifies accuracy differently from existing measures. These measures can be used alongside existing ones to provide a more comprehensive means of assessing the accuracy and potential utility of spatio-temporal crime prediction models.
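To make the flavor of such measures concrete, the sketch below computes the standard (unpenalized) predictive accuracy index and a simple average logarithmic score for per-cell binary forecasts. These are textbook forms shown for illustration only; neither the PPAI penalty nor the paper's exact ALS definition is reproduced here, and all inputs are simulated.

```r
# Generic illustrations of two families of accuracy measures for gridded
# crime forecasts; inputs are hypothetical per-cell counts and probabilities.

pai <- function(crimes, flagged) {
  # Standard predictive accuracy index for equal-area grid cells:
  # share of crimes captured in flagged ("hotspot") cells divided by
  # the share of cells flagged.
  (sum(crimes[flagged]) / sum(crimes)) / mean(flagged)
}

avg_log_score <- function(crime_occurred, pred_prob, eps = 1e-12) {
  # Average logarithmic score for per-cell "crime / no crime" forecasts;
  # larger (less negative) values indicate better probabilistic accuracy.
  p <- pmin(pmax(pred_prob, eps), 1 - eps)
  mean(crime_occurred * log(p) + (1 - crime_occurred) * log(1 - p))
}

# Toy 100-cell grid
set.seed(2)
crimes  <- rpois(100, 0.2)
pred    <- runif(100)
flagged <- pred >= quantile(pred, 0.9)   # flag the top 10% of cells
pai(crimes, flagged)
avg_log_score(as.integer(crimes > 0), pred)
```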
Adaptive interventions (AIs) are becoming increasingly popular in the medical and behavioral sciences. An AI is a sequence of individualized intervention options that specifies for whom and under what conditions different intervention options should be offered, in order to address the changing needs of individuals as they progress over time. The sequential multiple assignment randomized trial (SMART) is a novel trial design developed to aid in empirically constructing effective AIs. The sequential randomizations in a SMART often yield multiple AIs that are embedded in the trial by design, and many SMARTs are motivated by scientific questions pertaining to the comparison of these embedded AIs. Existing data analytic methods and sample size planning resources for SMARTs are suitable for superiority testing, namely for testing whether one embedded AI yields better primary outcomes on average than another. This represents a major scientific gap, since AIs are often motivated by the need to deliver support/care in a less costly or less burdensome manner while still yielding benefits that are equivalent or non-inferior to those produced by a more costly or burdensome standard of care. Here, we develop data analytic methods and sample size formulas for SMART studies aiming to test the non-inferiority or equivalence of one AI relative to another. Sample size and power considerations are discussed with supporting simulations, and online sample size planning resources are provided. For illustration, we use an example from a SMART study aiming to develop an AI for promoting weight loss among overweight/obese adults.
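As a point of reference for the non-inferiority framing, the sketch below implements the textbook per-group sample size formula for a one-sided non-inferiority comparison of two means. It is not the SMART-specific formula developed in the paper, which additionally accounts for the sequential randomization design; the function name and numeric inputs are hypothetical.

```r
# Textbook per-group sample size for a one-sided non-inferiority test of two
# means, shown only to illustrate the non-inferiority framing.
ni_sample_size <- function(margin, sd, true_diff = 0,
                           alpha = 0.05, power = 0.80) {
  z_a <- qnorm(1 - alpha)   # one-sided type I error
  z_b <- qnorm(power)       # target power
  ceiling(2 * sd^2 * (z_a + z_b)^2 / (margin - true_diff)^2)
}

# e.g., a weight-loss outcome in kg with a 3 kg non-inferiority margin
ni_sample_size(margin = 3, sd = 10)
```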
Spatial prediction of weather elements such as temperature, precipitation, and barometric pressure is generally based on satellite imagery or data collected at ground stations. Neither of these sources provides information at a more granular, hyper-local resolution. On the other hand, crowdsourced weather data, which are captured by sensors installed on mobile devices and gathered by weather-related mobile apps such as WeatherSignal and AccuWeather, can serve as potential data sources for analyzing environmental processes at a hyper-local resolution. However, because of the low quality of the sensors and the non-laboratory environment, the quality of the observations in crowdsourced data is compromised. This paper describes methods to improve hyper-local spatial prediction using such noisy, varying-quality crowdsourced information. We introduce a reliability metric, the Veracity Score (VS), to assess the quality of crowdsourced observations using coarser, but high-quality, reference data. A VS-based methodology for analyzing noisy spatial data is proposed and evaluated through extensive simulations. The merits of the proposed approach are illustrated through case studies analyzing crowdsourced daily average ambient temperature readings for one day in the contiguous United States.
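The sketch below illustrates, in purely hypothetical form, the general idea of converting agreement with a high-quality reference product into a per-observation reliability weight. The actual Veracity Score is defined in the paper and may differ substantially from this Gaussian-kernel stand-in; the function name and inputs are illustrative only.

```r
# Hypothetical stand-in for a reliability weight based on agreement with a
# co-located high-quality reference value (e.g., a coarse gridded product).
reliability_weight <- function(crowd_value, reference_value, scale = 2) {
  # close to 1 when a crowdsourced reading matches the reference value,
  # decaying toward 0 as the discrepancy grows ('scale' in deg C here)
  exp(-((crowd_value - reference_value) / scale)^2)
}

reliability_weight(crowd_value = c(21.3, 27.9), reference_value = 22.0)
# Such weights could, for example, down-weight noisy readings in a weighted
# kriging or local-regression spatial prediction.
```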
A utility-based Bayesian population finding (BaPoFi) method was proposed by Morita and Muller (2017, Biometrics, 1355-1365) to analyze data from a randomized clinical trial with the aim of identifying good predictive baseline covariates for optimizing the target population of a future study. The approach casts population finding as a formal decision problem, together with a flexible probability model that uses a random forest to define the regression mean function. BaPoFi is constructed to handle a single continuous or binary outcome variable. In this paper, we develop BaPoFi-TTE as an extension of the earlier approach to the clinically important case of time-to-event (TTE) data with censoring, while also accounting for a toxicity outcome. We model the association of the TTE data with baseline covariates using a semi-parametric failure time model with a Polya tree prior for the unknown error term and a random forest for a flexible regression mean function. We define a utility function that addresses the trade-off between efficacy and toxicity, one of the important clinical considerations in population finding. We examine the operating characteristics of the proposed method in extensive simulation studies. For illustration, we apply the proposed method to data from a randomized oncology clinical trial; concerns that arose in a preliminary analysis of the same data based on a parametric model motivated the more general approach proposed here.
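As a loose illustration of an efficacy-toxicity trade-off, the sketch below scores a candidate subpopulation by penalizing its estimated mean survival for its estimated toxicity probability. The functional form, the weight lambda, and the numeric inputs are all hypothetical and are not the utility function defined in the paper.

```r
# Hypothetical efficacy-toxicity trade-off utility for a candidate
# subpopulation; inputs are an estimated (restricted) mean survival time in
# months and an estimated toxicity probability.
utility <- function(mean_survival, tox_prob, lambda = 12) {
  # lambda: months of mean survival one is willing to trade to avoid a
  # one-unit increase in toxicity probability
  mean_survival - lambda * tox_prob
}

utility(mean_survival = 18, tox_prob = 0.25)   # 18 - 12 * 0.25 = 15
```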