Multiple systems estimation is a key approach for quantifying hidden populations such as the number of victims of modern slavery. The UK Government published an estimate of 10,000 to 13,000 victims, constructed by the present author, as part of the strategy leading to the Modern Slavery Act 2015. This estimate was obtained by a stepwise multiple systems method based on six lists. Further investigation shows that a small proportion of the possible models give rather different answers, and that other model-fitting approaches may choose one of these. Three data sets collected in the field of modern slavery, together with a data set on the death toll in the Kosovo conflict, are used to investigate the stability and robustness of various multiple systems estimation methods. The crucial aspect is the way that interactions between lists are modelled, because these can substantially affect the results. Model selection and Bayesian approaches are considered in detail, in particular to assess their stability and robustness when applied to real modern slavery data. A new Markov chain Monte Carlo Bayesian approach is developed; overall, it gives robust and stable results, at least for the examples considered. The software and data sets are freely and publicly available to facilitate wider implementation and further research.
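To make the estimation step concrete, here is a minimal sketch of multiple systems (capture-recapture) estimation via a log-linear Poisson model. The three lists and the counts are illustrative assumptions, not the six-list modern slavery data analysed in the paper; the choice of which list interactions to include is exactly the modelling decision the paper examines.

```python
# Minimal sketch of multiple systems estimation with a log-linear Poisson
# model. Lists A, B, C and the counts are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per observed capture history; the all-zero history (cases seen by
# no list) is unobserved and is what we want to estimate.
data = pd.DataFrame({
    "A":     [1, 0, 0, 1, 1, 0, 1],
    "B":     [0, 1, 0, 1, 0, 1, 1],
    "C":     [0, 0, 1, 0, 1, 1, 1],
    "count": [45, 60, 30, 12, 8, 15, 3],
})

# Main effects plus one two-list interaction; which interactions to include is
# the model-selection question whose consequences the paper investigates.
fit = smf.glm("count ~ A + B + C + A:B", data=data,
              family=sm.families.Poisson()).fit()

# The fitted value for the unobserved (0, 0, 0) cell is the "dark figure".
unseen = np.asarray(fit.predict(pd.DataFrame({"A": [0], "B": [0], "C": [0]})))[0]
print(f"estimated unseen: {unseen:.0f}, total: {data['count'].sum() + unseen:.0f}")
```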
Studying the determinants of adverse pregnancy outcomes such as stillbirth and preterm birth is of considerable interest in epidemiology. Understanding the role of both individual and community risk factors for these outcomes is crucial for planning appropriate clinical and public health interventions. With this goal, we develop geospatial mixed effects logistic regression models for adverse pregnancy outcomes. Our models account for both spatial autocorrelation and heterogeneity between neighborhoods. To mitigate the low incidence of stillbirth and preterm birth in our data, we explore class rebalancing techniques to improve predictive power. To assess the informative value of the covariates in our models, we use the posterior distributions of their coefficients to gauge how well they can be distinguished from zero. As a case study, we model stillbirth and preterm birth in the city of Philadelphia, incorporating both patient-level data from electronic health records (EHRs) and publicly available neighborhood data at the census tract level. We find that patient-level features such as self-identified race and ethnicity are highly informative for both outcomes. Neighborhood-level factors are also informative, with poverty important for stillbirth and crime important for preterm birth. Finally, we identify the neighborhoods in Philadelphia at highest risk of stillbirth and preterm birth.
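As a rough illustration of the modelling idea, the sketch below fits a logistic regression with a random intercept for each census tract by penalized maximum likelihood on simulated data. The data, dimensions and variance parameter are assumptions for illustration only, and the spatial autocorrelation component of the paper's Bayesian model is omitted here.

```python
# Sketch of a mixed effects logistic regression with a per-tract random
# intercept, fit by penalized (ridge) maximum likelihood on simulated data.
# All values are illustrative assumptions; spatial autocorrelation is omitted.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n, n_tracts, p = 2000, 50, 3
X = rng.normal(size=(n, p))                  # patient-level covariates
tract = rng.integers(0, n_tracts, size=n)    # census tract of each patient
u_true = rng.normal(scale=0.7, size=n_tracts)
y = rng.binomial(1, expit(X @ np.array([0.5, -0.3, 0.2]) + u_true[tract]))

def neg_penalized_loglik(params, sigma_u=0.7):
    beta, u = params[:p], params[p:]
    eta = X @ beta + u[tract]
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))   # Bernoulli log-likelihood
    penalty = 0.5 * np.sum(u ** 2) / sigma_u ** 2       # Gaussian shrinkage on tract effects
    return penalty - loglik

fit = minimize(neg_penalized_loglik, np.zeros(p + n_tracts), method="L-BFGS-B")
print("fixed effects:", np.round(fit.x[:p], 2))
```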
Preventing periodontal diseases (PD) and maintaining the structure and function of teeth are important goals for personal oral care. To understand the heterogeneity in patients with diverse PD patterns, we develop BAREB, a Bayesian repulsive biclustering method that can simultaneously cluster the PD patients and their tooth sites after taking the patient- and site-level covariates into consideration. BAREB uses a determinantal point process (DPP) prior to induce diversity among different biclusters, which promotes parsimony and interpretability. Since PD progression is hypothesized to be spatially referenced, BAREB factors in the spatial dependence among tooth sites. In addition, since PD is the leading cause of tooth loss, the missing data mechanism is non-ignorable; such nonrandom missingness is incorporated into BAREB. For posterior inference, we design an efficient reversible jump Markov chain Monte Carlo sampler. Simulation studies show that BAREB is able to accurately estimate the biclusters and compares favorably to alternatives. As a real-world application, we apply BAREB to a data set from a clinical PD study and obtain desirable and interpretable results. A major contribution of this paper is the Rcpp implementation of BAREB, available at https://github.com/YanxunXu/BAREB.
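The repulsion induced by a DPP prior can be illustrated in a few lines: configurations of cluster-level parameters that sit close together receive a smaller prior weight. The Gaussian kernel, bandwidth and example configurations below are illustrative assumptions, not BAREB's actual prior specification.

```python
# Toy illustration of the repulsive effect of a determinantal point process
# prior: the unnormalized weight is the determinant of a kernel matrix, which
# shrinks toward zero when configurations are nearly identical.
import numpy as np

def dpp_weight(points, lengthscale=1.0):
    """Determinant of a Gaussian kernel matrix over the configuration."""
    diff = points[:, None, :] - points[None, :, :]
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * lengthscale ** 2))
    return np.linalg.det(K)

spread_out = np.array([[0.0], [2.0], [4.0]])   # well-separated cluster effects
crowded    = np.array([[0.0], [0.1], [0.2]])   # nearly identical effects

print("weight, spread out:", dpp_weight(spread_out))   # close to 1
print("weight, crowded:   ", dpp_weight(crowded))      # close to 0
```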
When a latent shoeprint is discovered at a crime scene, forensic analysts inspect it for distinctive patterns of wear, such as scratches and holes (known as accidentals), on the source shoe's sole. If its accidentals correspond to those of a suspect's shoe, the print can be used as forensic evidence to place the suspect at the crime scene. The strength of this evidence depends on the random match probability: the chance that a shoe chosen at random would match the crime scene print's accidentals. Evaluating random match probabilities requires an accurate model for the spatial distribution of accidentals on shoe soles. A recent report by the President's Council of Advisors on Science and Technology criticized existing models in the literature, calling for new empirically validated techniques. We respond to this request with a new spatial point process model for accidental locations, developed within a hierarchical Bayesian framework. We treat the tread pattern of each shoe as a covariate, allowing us to pool information across large heterogeneous databases of shoes. Existing models ignore this information; our results show that including it leads to significantly better model fit. We demonstrate this by fitting our model to one such database.
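To indicate what treating the tread pattern as a covariate can look like in a point process setting, the sketch below evaluates the likelihood of an inhomogeneous Poisson process on a discretized sole, with a binary tread indicator driving the log-intensity. The grid, tread layout and coefficients are assumptions for illustration, not the hierarchical Bayesian model fitted in the paper.

```python
# Sketch of an inhomogeneous Poisson point process for accidental locations on
# a discretized shoe sole, with the tread pattern as a covariate in the
# log-intensity. All values are illustrative assumptions.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)
grid = 40
cell_area = 1.0 / grid ** 2
tread = rng.binomial(1, 0.3, size=(grid, grid))   # 1 = tread element present

beta0, beta1 = 2.0, 1.5                           # log-intensity coefficients
intensity = np.exp(beta0 + beta1 * tread)         # accidentals per unit area

# Simulate accidental counts per cell, then evaluate the Poisson log-likelihood.
mu = intensity * cell_area
counts = rng.poisson(mu)
loglik = np.sum(counts * np.log(mu) - mu - gammaln(counts + 1))
print("accidentals:", counts.sum(), " log-likelihood:", round(loglik, 1))
```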
Causal mediation analysis is used to evaluate the direct and indirect causal effects of a treatment on an outcome of interest through an intermediate variable, or mediator. It is difficult to identify the direct and indirect causal effects because the mediator cannot be randomly assigned in many real applications. In this article, we consider a causal model that includes latent confounders between the mediator and the outcome. We present sufficient conditions for identifying the direct and indirect effects and propose an approach for estimating them. The performance of the proposed approach is evaluated by simulation studies. Finally, we apply the approach to a data set from a customer loyalty survey conducted by a telecom company.
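For reference, the standard potential-outcome decomposition underlying the terms "direct" and "indirect" effect is shown below in generic notation; this is textbook mediation notation, not the article's specific identification conditions under latent confounding.

```latex
% Standard decomposition for a binary treatment X, mediator M and outcome Y:
% the total effect splits into a natural direct and a natural indirect effect.
\mathrm{TE}  = E\bigl[Y(1, M(1)) - Y(0, M(0))\bigr], \qquad
\mathrm{NDE} = E\bigl[Y(1, M(0)) - Y(0, M(0))\bigr], \qquad
\mathrm{NIE} = E\bigl[Y(1, M(1)) - Y(1, M(0))\bigr],
\qquad \text{so that } \mathrm{TE} = \mathrm{NDE} + \mathrm{NIE}.
```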
In this paper, we study porous media flows in heterogeneous stochastic media. We propose an efficient forward simulation technique that is tailored for variational Bayesian inversion. As a starting point, the proposed forward simulation technique decomposes the solution into a sum of separable functions (with respect to randomness and space), where each term is calculated by a variational approach; this is similar to Proper Generalized Decomposition (PGD). Next, we apply a multiscale technique to solve for each term and, further, decompose the random function into 1D fields. As a result, our proposed method provides an approximation hierarchy for the solution as we increase the number of terms in the expansion and the spatial resolution of each term. We use the hierarchical solution distributions in a variational Bayesian approximation to perform uncertainty quantification for the inverse problem. We conduct a detailed numerical study to explore the performance of the proposed uncertainty quantification technique and establish theoretical posterior concentration.
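The separated representation referred to above can be written schematically as follows; the notation is a generic PGD-style expansion assumed for illustration, not the paper's exact formulation.

```latex
% Generic PGD-style separated representation (illustrative notation): the
% solution is a sum of products of a spatial mode and 1D functions of each
% random parameter; increasing N refines the approximation hierarchy.
u(x, \xi_1, \dots, \xi_d) \;\approx\; \sum_{i=1}^{N} F_i(x) \prod_{j=1}^{d} G_{ij}(\xi_j).
```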