
Spatial shrinkage via the product independent Gaussian process prior

Published by: Arkaprava Roy
Publication date: 2018
Research field: Mathematical Statistics
Paper language: English





We study the problem of sparse signal detection on a spatial domain. We propose a novel approach that models continuous signals which are sparse and piecewise smooth as a product of independent Gaussian processes (PING) with a smooth covariance kernel. The smoothness of the PING process is ensured by the smoothness of the covariance kernels of the Gaussian components in the product, and sparsity is controlled by the number of components. The bivariate kurtosis of the PING process shows that more components in the product result in thicker tails and a sharper peak at zero. Simulation results demonstrate the improvement in estimation under the PING prior over the Gaussian process (GP) prior for different image regressions. We apply our method to a longitudinal MRI dataset to detect the regions most strongly affected by multiple sclerosis (MS) through an image-on-scalar regression model. Due to the huge dimensionality of these images, we transform the data into the spectral domain and develop methods to conduct computation in this domain. In our MS imaging study, the estimates from the PING model are more informative than those from the GP model.
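As a rough illustration of the construction described above (not the authors' code), a PING draw can be simulated by multiplying independent GP samples elementwise on a grid. The grid size, squared-exponential kernel, and length scale below are assumptions chosen only for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D spatial grid on [0, 1] (illustrative choice)
n = 200
x = np.linspace(0.0, 1.0, n)

def se_kernel(x, length_scale=0.1, var=1.0):
    # Smooth squared-exponential covariance kernel
    d = x[:, None] - x[None, :]
    return var * np.exp(-0.5 * (d / length_scale) ** 2)

K = se_kernel(x) + 1e-6 * np.eye(n)  # jitter for numerical stability
L = np.linalg.cholesky(K)

def sample_gp():
    # Draw one smooth GP sample path on the grid
    return L @ rng.standard_normal(n)

def sample_ping(m):
    # PING draw: elementwise product of m independent GP draws;
    # more factors concentrate mass near zero (sparser signal)
    f = np.ones(n)
    for _ in range(m):
        f *= sample_gp()
    return f

gp_draw = sample_ping(1)    # ordinary GP draw
ping_draw = sample_ping(3)  # product of three independent GPs
print(np.mean(np.abs(ping_draw) < 0.1), np.mean(np.abs(gp_draw) < 0.1))
```

The printed fractions illustrate the abstract's kurtosis claim: multiplying more components pushes a larger share of the signal toward zero while the remaining peaks stay smooth.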




Read also

We propose Dirichlet Process Mixture (DPM) models for prediction and cluster-wise variable selection, based on two choices of shrinkage baseline prior distributions for the linear regression coefficients, namely the Horseshoe prior and the Normal-Gamma prior. We show in a simulation study that each of the two proposed DPM models tends to outperform the standard DPM model based on the non-shrinkage normal prior, in terms of predictive, variable selection, and clustering accuracy. This is especially true for the Horseshoe model, and when the number of covariates exceeds the within-cluster sample size. A real data set is analyzed to illustrate the proposed modeling methodology, where both proposed DPM models again attained better predictive accuracy.
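The shrinkage behavior of the Horseshoe prior mentioned above can be seen in a minimal Monte Carlo sketch (an illustration, not the paper's model): a coefficient draw is a normal scaled by a half-Cauchy local scale, giving both a spike at zero and heavy tails relative to a plain normal prior. The global scale `tau=1.0` is an assumed default:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_horseshoe(size, tau=1.0):
    # Local scales: half-Cauchy(0, 1), i.e. |standard Cauchy|
    lam = np.abs(rng.standard_cauchy(size))
    # Coefficient draw: beta = tau * lambda * z with z ~ N(0, 1)
    return tau * lam * rng.standard_normal(size)

draws = sample_horseshoe(100_000)
normal = rng.standard_normal(100_000)

# Spike at zero: more mass near 0 than a standard normal ...
print(np.mean(np.abs(draws) < 0.1), np.mean(np.abs(normal) < 0.1))
# ... and heavy tails: far larger extreme draws
print(np.abs(draws).max(), np.abs(normal).max())
```

This spike-and-tails shape is what lets the Horseshoe-based DPM shrink irrelevant coefficients hard while leaving large signals essentially untouched.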
Masahiro Tanaka, 2020
This study proposes a novel hierarchical prior for inferring possibly low-rank matrices measured with noise. We consider three-component matrix factorization, as in singular value decomposition, and its fully Bayesian inference. The proposed prior is specified by a scale mixture of exponential distributions that has spike and slab components. The weights for the spike/slab parts are inferred using a special prior based on a cumulative shrinkage process. The proposed prior is designed to increasingly aggressively push less important, or essentially redundant, singular values toward zero, leading to more accurate estimates of low-rank matrices. To ensure parameter identification, we simulate posterior draws from an approximated posterior, in which the constraints are slightly relaxed, using a No-U-Turn sampler. By means of a set of simulation studies, we show that our proposal is competitive with alternative prior specifications and that it does not incur significant additional computational burden. We apply the proposed approach to sectoral industrial production in the United States to analyze the structural change during the Great Moderation period.
Spatial models are used in a variety of research areas, such as environmental sciences, epidemiology, or physics. A common phenomenon in many spatial regression models is spatial confounding. This phenomenon takes place when spatially indexed covariates modeling the mean of the response are correlated with the spatial random effect. As a result, estimates for regression coefficients of the covariates can be severely biased and their interpretation is no longer valid. Recent literature has shown that typical solutions for reducing spatial confounding can lead to misleading and counterintuitive results. In this paper, we develop a computationally efficient spatial model in a Bayesian framework integrating a novel prior structure that reduces spatial confounding. Starting from the univariate case, we extend our prior structure to the case of multiple spatially confounded covariates. In a simulation study, we show that our novel model flexibly detects and reduces spatial confounding in spatial datasets, and it performs better than typically used methods such as restricted spatial regression. These results are promising for any applied researcher who wishes to interpret covariate effects in spatial regression models. As a real data illustration, we study the effect of elevation and temperature on the mean of daily precipitation in Germany.
Variational autoencoders (VAE) are a powerful and widely-used class of models to learn complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and identically distributed. However, for many important datasets, such as time-series of images, this assumption is too strong: accounting for covariances between samples, such as those in time, can yield a more appropriate model specification and improve performance in downstream tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue. The GPPVAE aims to combine the power of VAEs with the ability to model correlations afforded by GP priors. To achieve efficient inference in this new class of models, we leverage structure in the covariance matrix, and introduce a new stochastic backpropagation strategy that allows for computing stochastic gradients in a distributed and low-memory fashion. We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications.
Anh Tong, Toan Tran, Hung Bui, 2020
Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models, since each kernel structure has different model complexity and data fitness. Recently, automatic kernel composition methods provide not only accurate prediction but also attractive interpretability through search-based methods. However, existing methods suffer from slow kernel composition learning. To tackle large-scale data, we propose a new sparse approximate posterior for GPs, MultiSVGP, constructed from groups of inducing points associated with individual additive kernels in compositional kernels. We demonstrate that this approximation provides a better fit to learn compositional kernels given empirical observations. We also provide theoretical justification of the error bound when compared to the traditional sparse GP. In contrast to the search-based approach, we present a novel probabilistic algorithm to learn a kernel composition by handling the sparsity in the kernel selection with a Horseshoe prior. We demonstrate that our model can capture characteristics of time series with significant reductions in computational time and have competitive regression performance on real-world data sets.
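The compositional kernels this abstract refers to are built by adding or multiplying base kernels, since sums and products of valid covariance kernels remain valid. A minimal sketch (kernel choices and hyperparameters are assumptions, not taken from the paper) combines a squared-exponential trend kernel with a periodic seasonality kernel:

```python
import numpy as np

def rbf(x1, x2, ls=1.0):
    # Squared-exponential kernel: smooth long-range trend
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def periodic(x1, x2, period=1.0, ls=1.0):
    # Exponentiated-sine-squared kernel: repeating structure
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ls ** 2)

x = np.linspace(0.0, 4.0, 50)

# Additive composition: trend + seasonality
K = rbf(x, x, ls=2.0) + periodic(x, x, period=1.0, ls=0.5)

# A valid covariance must be symmetric positive semi-definite
eig_min = np.linalg.eigvalsh(K).min()
print(K.shape, eig_min)
```

In the additive case each component kernel gets its own group of inducing points under an approximation like MultiSVGP, which is what allows the sparse posterior to track the individual additive terms.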