
Scalable Bayesian modeling for smoothing disease risks in large spatial data sets

Added by Aritz Adin
Publication date: 2020
Language: English





Several methods have been proposed in the spatial statistics literature for the analysis of big data sets in continuous domains. However, new methods for analyzing high-dimensional areal data are still scarce. Here, we propose a scalable Bayesian modeling approach for smoothing mortality (or incidence) risks in high-dimensional data, that is, when the number of small areas is very large. The method is implemented in the R add-on package bigDM. Model fitting and inference are based on the idea of divide and conquer and use integrated nested Laplace approximations and numerical integration. We analyze the proposal's empirical performance in a comprehensive simulation study that considers two model-free settings. Finally, the methodology is applied to male colorectal cancer mortality in Spanish municipalities, showing its benefits with respect to the standard approach in terms of goodness of fit and computational time.
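To make the workflow concrete, below is a minimal sketch of a divide-and-conquer fit with the package's CAR_INLA() function. The example cartography Carto_SpainMUN and the column names ("ID", "region", "obs", "exp") are assumptions taken from the package's documented example data and should be checked against ?CAR_INLA; treat this as an illustration, not a verified script.

    # Sketch of a scalable disease-mapping fit with bigDM (assumptions noted above)
    library(bigDM)

    data("Carto_SpainMUN")   # Spanish municipalities with observed/expected counts

    fit <- CAR_INLA(carto    = Carto_SpainMUN,
                    ID.area  = "ID",       # small-area identifier (assumed column name)
                    ID.group = "region",   # grouping variable defining the partition
                    O = "obs", E = "exp",  # observed and expected counts (assumed names)
                    prior = "Leroux",      # CAR prior on the spatial random effect
                    model = "partition",   # divide-and-conquer model
                    k = 0)                 # disjoint partition; k > 0 expands blocks with neighbours

    summary(fit)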



Related research

Environmental processes resolved at a sufficiently small scale in space and time will inevitably display non-stationary behavior. Such processes are both challenging to model and computationally expensive when the data size is large. Instead of modeling the global non-stationarity explicitly, local models can be applied to disjoint regions of the domain. The choice of the size of these regions is dictated by a bias-variance trade-off: large regions will have smaller variance and larger bias, whereas small regions will have higher variance and smaller bias. From both the modeling and computational points of view, small regions are preferable to better accommodate the non-stationarity; however, in practice, large regions are necessary to control the variance. We propose a novel Bayesian three-step approach that allows for smaller regions without the increase in variance that would otherwise follow. We are able to propagate the uncertainty from one step to the next without the issues caused by reusing the data. The improvement in inference also results in improved prediction, as our simulated example shows. We illustrate this new approach on a data set of simulated high-resolution wind speed data over Saudi Arabia.
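As a generic illustration of the local-model idea (not the authors' three-step approach), the sketch below splits a toy domain into disjoint blocks and fits an independent model in each; the number of blocks plays the role of the bias-variance knob discussed above.

    # Toy partition-and-fit illustration in base R
    set.seed(1)
    n  <- 5000
    xy <- data.frame(x = runif(n), y = runif(n))
    xy$z <- sin(6 * xy$x) + cos(4 * xy$y) + rnorm(n, sd = 0.2)  # non-stationary toy field

    n_blocks <- 4                                   # 4 x 4 grid of disjoint regions
    xy$block <- interaction(cut(xy$x, n_blocks), cut(xy$y, n_blocks))

    # One local model per block: smaller blocks lower the bias, raise the variance
    local_fits <- lapply(split(xy, xy$block), function(d) lm(z ~ x + y, data = d))
    sapply(local_fits, function(m) summary(m)$sigma)  # per-block residual scales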
In spatial statistics, it is often assumed that the spatial field of interest is stationary and that its covariance has a simple parametric form, but these assumptions are not appropriate in many applications. Given replicate observations of a Gaussian spatial field, we propose nonstationary and nonparametric Bayesian inference on the spatial dependence. Instead of estimating the quadratically many (in the number of spatial locations) entries of the covariance matrix, the idea is to infer a near-linear number of nonzero entries in a sparse Cholesky factor of the precision matrix. Our prior assumptions are motivated by recent results on the exponential decay of the entries of this Cholesky factor for Matérn-type covariances under a specific ordering scheme. Our methods are highly scalable and parallelizable. We conduct numerical comparisons and apply our methodology to climate-model output, enabling statistical emulation of an expensive physical model.
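The object at the heart of this abstract can be seen in a few lines of base R: the sketch below builds an exponential (Matérn with smoothness 1/2) covariance on random locations, forms the Cholesky factor of its precision matrix, and checks that most entries are negligible. The ordering used here is only a crude stand-in for the scheme the paper assumes.

    set.seed(1)
    n  <- 400
    xy <- cbind(runif(n), runif(n))
    xy <- xy[order(rowSums(xy)), ]   # crude stand-in for a maximin-type ordering

    D <- as.matrix(dist(xy))
    C <- exp(-D / 0.3)               # exponential covariance (Matern, nu = 1/2)
    U <- chol(solve(C))              # Cholesky factor of the precision matrix

    # Fraction of non-negligible entries: far from the full quadratic count
    mean(abs(U) > 1e-3 * max(abs(U)))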
Xi Chen, Kaoru Irie, David Banks (2016)
Traffic flow count data in networks arise in many applications, such as automobile or aviation transportation, certain directed social network contexts, and Internet studies. Using an example of Internet browser traffic flow through site-segments of an international news website, we present Bayesian analyses of two linked classes of models which, in tandem, allow fast, scalable and interpretable Bayesian inference. We first develop flexible state-space models for streaming count data, able to adaptively characterize and quantify network dynamics efficiently in real-time. We then use these models as emulators of more structured, time-varying gravity models that allow formal dissection of network dynamics. This yields interpretable inferences on traffic flow characteristics, and on dynamics in interactions among network nodes. Bayesian monitoring theory defines a strategy for sequential model assessment and adaptation in cases when network flow data deviates from model-based predictions. Exploratory and sequential monitoring analyses of evolving traffic on a network of web site-segments in e-commerce demonstrate the utility of this coupled Bayesian emulation approach to analysis of streaming network count data.
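As a generic, self-contained illustration of sequential Bayesian learning for streaming counts (not the authors' emulation framework), the sketch below runs a discounted gamma-Poisson filter that adapts its flow-rate estimate in real time.

    set.seed(1)
    y <- rpois(200, lambda = rep(c(5, 12), each = 100))  # counts with a level shift

    delta <- 0.95              # discount factor: smaller values adapt faster
    a <- 1; b <- 1             # Gamma(a, b) prior on the Poisson rate
    rate_est <- numeric(length(y))
    for (t in seq_along(y)) {
      a <- delta * a; b <- delta * b   # discounting inflates prior uncertainty
      a <- a + y[t]; b <- b + 1        # conjugate update with the new count
      rate_est[t] <- a / b             # posterior mean of the current flow rate
    }
    tail(rate_est)             # tracks the post-shift level near 12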
We propose a framework for Bayesian nonparametric estimation of the rate at which new infections occur, assuming that the epidemic is only partially observed. The methodology relies on modelling the rate at which new infections occur as a function that depends only on time. Two types of prior distribution are proposed, namely step functions and B-splines. The methodology is illustrated using both simulated and real datasets, and we show that certain aspects of the epidemic, such as seasonality and super-spreading events, are picked up without having to incorporate them explicitly into a parametric model.
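One ingredient of such an approach is easy to sketch: a B-spline representation of the time-varying infection rate, where a prior on the basis coefficients induces a prior on the rate function. The basis size and prior scale below are illustrative choices, not the paper's.

    library(splines)

    tt <- seq(0, 1, length.out = 200)          # observation times
    B  <- bs(tt, df = 8, intercept = TRUE)     # cubic B-spline basis (8 functions)

    beta <- rnorm(ncol(B), sd = 0.5)           # one draw from a coefficient prior
    rate <- exp(B %*% beta)                    # log link keeps the rate positive

    plot(tt, rate, type = "l", xlab = "time", ylab = "rate (one prior draw)")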
We study possible relations between Alzheimer's disease progression and the structure of the connectome, the white matter connecting different regions of the brain. Regression models in covariates including age, gender, and disease status for the extent of white matter connecting each pair of brain regions are proposed. Subject inhomogeneity is also incorporated in the model through random effects with an unknown distribution. As there is a large number of pairs of regions, we also adopt a dimension-reduction technique through graphon (Lovász and Szegedy (2006)) functions, which reduces functions of pairs of regions to functions of regions. The connecting graphon functions are considered unknown, but assumed smoothness allows putting priors of low complexity on them. We pursue a nonparametric Bayesian approach by assigning a Dirichlet process scale mixture of zero-mean normal prior to the distributions of the random effects and finite random series of tensor products of B-splines priors to the underlying graphon functions. Markov chain Monte Carlo techniques for drawing samples from the posterior distributions are developed. The proposed Bayesian method overwhelmingly outperforms similar ANCOVA models in the simulation setup. The proposed Bayesian approach is applied to a dataset of 100 subjects and 83 brain regions, and key regions implicated in the changing connectome are identified.
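The dimension-reduction step can be sketched directly: a graphon-style function of pairs of regions is represented by a finite tensor product of B-splines, and symmetrizing the coefficient matrix (an assumption about how one would enforce g(u, v) = g(v, u), not necessarily the authors' construction) makes the surface symmetric.

    library(splines)

    u <- seq(0, 1, length.out = 50)            # latent positions for 50 regions
    B <- bs(u, df = 6, intercept = TRUE)       # marginal B-spline basis (6 columns)

    theta <- matrix(rnorm(36), 6, 6)
    theta <- (theta + t(theta)) / 2            # symmetric coefficients => symmetric g
    G <- B %*% theta %*% t(B)                  # g(u_i, u_j) evaluated on the grid

    image(u, u, G)                             # one prior draw of the graphon surface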
