Background: All states in the US have enacted at least some naloxone access laws (NALs) in an effort to reduce opioid overdose lethality. Previous evaluations found that NALs increased naloxone dispensing but showed mixed results for opioid overdose mortality. One reason for the mixed results could be a failure to address violations of the positivity assumption caused by the co-occurrence of NAL enactment with the enactment of related laws, ultimately resulting in bias, increased variance, and low statistical power. Methods: We reformulated the research question to alleviate some challenges related to law co-occurrence. Because NAL enactment was closely correlated with Good Samaritan Law (GSL) enactment, we bundled NAL with GSL and estimated the hypothetical associations of enacting NAL/GSL up to 2 years earlier (a shift supported by the observed data) with naloxone dispensing and opioid overdose mortality. Results: We estimated that such a shift in NAL/GSL duration would have been associated with increased naloxone dispensations (0.28 dispensations/100,000 people (95% CI: 0.18-0.38) in 2013 among early NAL/GSL enactors; 47.58 (95% CI: 28.40-66.76) in 2018 among late enactors). We also estimated that such a shift would have been associated with increased opioid overdose mortality (1.88 deaths/100,000 people (95% CI: 1.03-2.72) in 2013; 2.06 (95% CI: 0.92-3.21) in 2018). Conclusions: Consistent with prior research, increased duration of NAL/GSL enactment was associated with increased naloxone dispensing. Contrary to expectation, we did not find a protective association with opioid overdose mortality, though residual bias due to unobserved confounding and interference likely remains.
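A minimal sketch of what a "shift" contrast like enacting NAL/GSL up to 2 years earlier can look like in code, using simulated state-year panel data and a plain outcome-regression (g-computation) step rather than the authors' actual estimator; all variable names, parameter values, and the data-generating model are hypothetical.

    # Hypothetical sketch: contrast outcomes under observed NAL/GSL enactment timing
    # versus enactment shifted 2 years earlier, on simulated state-year data.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.default_rng(0)
    n_states, years = 50, np.arange(2010, 2019)
    df = pd.DataFrame([(s, y) for s in range(n_states) for y in years],
                      columns=["state", "year"])
    enact_year = rng.integers(2012, 2018, size=n_states)       # hypothetical enactment years
    df["duration"] = np.clip(df["year"] - enact_year[df["state"]], 0, None)  # years enacted
    df["covariate"] = rng.normal(size=len(df))                  # hypothetical state-year confounder
    true_rate = np.exp(-1.0 + 0.05 * df["duration"] + 0.3 * df["covariate"])
    df["deaths_per_100k"] = rng.poisson(10 * true_rate)         # simulated outcome

    # Fit an outcome model, then average predictions under observed duration vs.
    # duration + 2 (i.e., enactment moved 2 years earlier) over the observed covariates.
    model = PoissonRegressor().fit(df[["duration", "covariate"]], df["deaths_per_100k"])
    shifted = df.assign(duration=df["duration"] + 2)
    effect = (model.predict(shifted[["duration", "covariate"]]).mean()
              - model.predict(df[["duration", "covariate"]]).mean())
    print(f"Mean outcome difference under a 2-year-earlier enactment shift: {effect:.2f}")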
Is it possible for a large sequence of measurements or observations that support a hypothesis to counterintuitively decrease our confidence? Can unanimous support be too good to be true? The assumption of independence is often made in good faith; however, rarely is consideration given to whether a systemic failure has occurred. Taking this possibility into account can cause certainty in a hypothesis to decrease as the evidence for it becomes apparently stronger. We perform a probabilistic Bayesian analysis of this effect with examples based on (i) archaeological evidence, (ii) weighing of legal evidence, and (iii) cryptographic primality testing. We find that, even with surprisingly low systemic failure rates, high confidence is very difficult to achieve; in particular, we find that certain analyses of cryptographically important numerical tests are highly optimistic, underestimating their false-negative rate by as much as a factor of $2^{80}$.
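A minimal worked sketch of the underlying Bayesian point, using an illustrative model of my own rather than the paper's: each independent test confirms the hypothesis with probability 1 if it is true and with a small false-positive rate f if it is false, but with probability eps the whole measurement system has failed and confirms everything regardless of the truth. The prior, f, and eps are assumed values.

    # Illustrative model (assumed, not the paper's exact analysis): a small systemic
    # failure probability eps caps the confidence attainable from unanimous evidence.
    def posterior(n, f=0.01, eps=1e-4, prior=0.5):
        like_true = (1 - eps) * 1.0 + eps        # P(n unanimous confirmations | H true)
        like_false = (1 - eps) * f**n + eps      # P(n unanimous confirmations | H false)
        return prior * like_true / (prior * like_true + (1 - prior) * like_false)

    for n in (1, 5, 20, 100):
        print(n, round(posterior(n), 6))
    # Instead of approaching 1, confidence saturates near 1/(1 + eps) ~ 0.9999: once
    # the unanimous evidence is "better" than the systemic failure rate, more of it
    # adds essentially nothing.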
Finding translational biomarkers stands at center stage of the future of personalized medicine in healthcare. We have observed notable challenges in identifying robust biomarkers, as some with strong performance in one scenario often fail to perform well in new trials (e.g., a different population or indication). With rapid developments in the clinical trial world (e.g., assays, disease definitions), new trials are very likely to differ from legacy ones in many respects, and this heterogeneity should be considered when developing biomarkers. In response, we recommend building this heterogeneity into the evaluation of biomarkers. In this paper, we present one such evaluation strategy: using leave-one-study-out (LOSO) cross-validation in place of conventional cross-validation (CV) methods to account for the potential heterogeneity across the trials used for building and testing the biomarkers. To compare the performance of K-fold and LOSO CV in estimating the effect size of biomarkers, we leveraged data from clinical trials and simulation studies. In our assessment, LOSO CV provided a more objective estimate of future performance. This conclusion remained true across different evaluation metrics and different statistical methods.
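A minimal sketch, on simulated data rather than the clinical trials described above, of the two validation schemes being compared: standard K-fold CV mixes studies across folds, whereas LOSO CV (leave-one-group-out with study as the group) always scores a model on a study it never saw during training. The dataset, model, and heterogeneity parameters below are all hypothetical.

    # Hypothetical comparison of K-fold vs. leave-one-study-out (LOSO) cross-validation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=600, n_features=20, random_state=0)
    study = rng.integers(0, 4, size=len(y))      # simulated study membership (4 trials)
    X = X + 0.5 * study[:, None]                 # inject between-study heterogeneity

    model = LogisticRegression(max_iter=1000)
    kfold_auc = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0),
                                scoring="roc_auc")
    loso_auc = cross_val_score(model, X, y, groups=study, cv=LeaveOneGroupOut(),
                               scoring="roc_auc")
    print("5-fold CV AUC:", kfold_auc.mean().round(3))
    print("LOSO  CV AUC:", loso_auc.mean().round(3))
    # LOSO scores each fit on a held-out study, mimicking deployment in a new trial.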
Recent observational results indicate that the functional shape of the spatially resolved star formation-molecular gas surface density relation depends on the spatial scale considered. These results may indicate a fundamental role for sampling effects on scales that are typically only a few times larger than those of the largest molecular clouds. To investigate the impact of this effect, we construct simple models for the distribution of molecular clouds in a typical star-forming spiral galaxy and, assuming a power-law relation between star formation rate (SFR) and cloud mass, explore a range of input parameters. We confirm that the slope and the scatter of the simulated SFR-molecular gas surface density relation depend on the size of the sub-galactic region considered, owing to stochastic sampling of the molecular cloud mass function, and that the effect is larger for steeper relations between SFR and molecular gas. There is a general trend for all slope values to approach ~unity for region sizes larger than 1-2 kpc, irrespective of the input SFR-cloud relation; this region size corresponds to the area over which the cloud mass function becomes fully sampled. We quantify the effects of selection biases in data tracing the SFR, either as thresholds (i.e., clouds below a given mass do not form stars) or as backgrounds (e.g., diffuse emission unrelated to current star formation is counted towards the SFR). Apparently discordant observational results are brought into agreement by this simple model, and the comparison of our simulations with data for a few galaxies supports a steep (>1) power-law index between SFR and molecular gas.
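A minimal sketch, with toy parameter values rather than the paper's model inputs, of the stochastic-sampling effect described above: cloud masses are drawn from a power-law mass function, each region's SFR is taken as the sum of M^N over its clouds, and the recovered SFR-gas slope is compared for regions containing few versus many clouds.

    # Toy illustration of stochastic sampling of a cloud mass function (assumed values).
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_clouds(n, m_min=1e4, m_max=1e6, alpha=1.8):
        # Draw cloud masses from dN/dM ~ M^-alpha via inverse-transform sampling.
        u = rng.random(n)
        a = 1.0 - alpha
        return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

    def region_slope(clouds_per_region, n_regions=500, N=1.5):
        gas, sfr = [], []
        for _ in range(n_regions):
            m = sample_clouds(rng.poisson(clouds_per_region) + 1)
            gas.append(m.sum())
            sfr.append((m**N).sum())        # assumed power-law SFR-cloud mass relation
        return np.polyfit(np.log10(gas), np.log10(sfr), 1)[0]

    for n_clouds in (3, 30, 300):           # few clouds ~ small regions, many ~ kpc scale
        print(n_clouds, round(region_slope(n_clouds), 2))
    # With few clouds per region the fitted slope is steeper (closer to the input N);
    # once the mass function is well sampled, the slope tends toward ~1.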
We present current methods for estimating treatment effects and spillover effects under interference, a term that covers a broad class of situations in which a unit's outcome depends not only on treatments received by that unit but also on treatments received by other units. To the extent that units react to each other, interact, or otherwise transmit effects of treatments, valid inference requires that we account for such interference, a departure from the traditional assumption that units' outcomes are affected only by their own treatment assignment. Interference and the associated spillovers may be a nuisance, or they may be of substantive interest to the researcher. In this chapter, we focus on interference in the context of randomized experiments. We review methods for settings where interference occurs over a general network. We then consider the special case where interference is contained within a hierarchical structure. Finally, we discuss the relationship between interference and contagion. We use the interference R package and simulated data to illustrate key points. We consider efficient designs that allow for estimation of treatment and spillover effects and discuss recent empirical studies that attempt to capture such effects.
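A minimal sketch, using simulated data and plain difference-in-means contrasts rather than the interference R package mentioned above, of how direct and spillover effects can be read off a two-stage (hierarchical) randomized design: groups are first assigned a treatment saturation, and units are then randomized to treatment within groups at that rate. All parameter values and the outcome model are hypothetical.

    # Hypothetical two-stage randomized design with direct and spillover effects.
    import numpy as np

    rng = np.random.default_rng(2)
    n_groups, group_size = 200, 20
    saturation = rng.choice([0.3, 0.7], size=n_groups)       # stage 1: group-level saturation
    results = []
    for g in range(n_groups):
        treat = rng.random(group_size) < saturation[g]       # stage 2: unit-level assignment
        frac_treated_peers = (treat.sum() - treat) / (group_size - 1)
        # Assumed outcome model: direct effect 2.0, spillover effect 1.0 per unit of
        # peer treatment fraction, plus noise.
        y = 1.0 + 2.0 * treat + 1.0 * frac_treated_peers + rng.normal(0, 1, group_size)
        for t, yi in zip(treat, y):
            results.append((saturation[g], t, yi))

    sat, t, y = map(np.array, zip(*results))
    direct = y[(t == 1) & (sat == 0.7)].mean() - y[(t == 0) & (sat == 0.7)].mean()
    spillover = y[(t == 0) & (sat == 0.7)].mean() - y[(t == 0) & (sat == 0.3)].mean()
    print(f"direct effect (within high-saturation groups): {direct:.2f}")      # ~2.0
    print(f"spillover on untreated (high vs low saturation): {spillover:.2f}") # ~1.0*(0.7-0.3)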
We prove that no fully transactional system can provide fast read transactions (including read-only transactions, which are considered the most frequent in practice). Specifically, to achieve fast read transactions, a system has to give up support for transactions that write more than one object. We prove this impossibility result for distributed storage systems that are causally consistent, i.e., that are not required to ensure any strong form of consistency. Therefore, our result also holds for any system that ensures a consistency level stronger than causal consistency, e.g., strict serializability. The impossibility result holds even for systems that store only two objects (and support at least two servers and at least four clients). It also holds for systems that are partially replicated. Our result justifies the design choices of state-of-the-art distributed transactional systems and suggests that system designers should not expend further effort trying to design fully transactional systems that both support fast read transactions and ensure causal or any stronger form of consistency.