We study the problem of sparse signal detection on a spatial domain. We propose a novel approach that models continuous signals that are sparse and piecewise smooth as a product of independent Gaussian processes (PING) with a smooth covariance kernel. The smoothness of the PING process is ensured by the smoothness of the covariance kernels of the Gaussian components in the product, and its sparsity is controlled by the number of components. The bivariate kurtosis of the PING process shows that more components in the product result in a heavier tail and a sharper peak at zero. Simulation results demonstrate improved estimation under the PING prior relative to a Gaussian process (GP) prior across different image regressions. We apply our method to a longitudinal MRI dataset to detect the regions most strongly affected by multiple sclerosis (MS) through an image-on-scalar regression model. Because of the huge dimensionality of these images, we transform the data into the spectral domain and develop methods to carry out the computation there. In our MS imaging study, the estimates from the PING model are more informative than those from the GP model.
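The core construction described above can be sketched in a few lines: draw several independent zero-mean GP realizations with a smooth (here, squared-exponential) covariance kernel on a grid, then take their elementwise product. All names, the kernel choice, and the parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_ping(x, n_components=3, length_scale=0.2, seed=0):
    """Draw one realization of a PING-style process on grid x:
    the elementwise product of independent zero-mean GP draws
    with a squared-exponential (smooth) covariance kernel.
    With n_components=1 this reduces to an ordinary GP draw."""
    rng = np.random.default_rng(seed)
    # Squared-exponential kernel matrix, with jitter for stability.
    d = np.subtract.outer(x, x)
    K = np.exp(-0.5 * (d / length_scale) ** 2) + 1e-8 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    # Each column is one independent smooth GP sample path.
    gp_draws = L @ rng.standard_normal((len(x), n_components))
    # Multiplying components concentrates mass near zero (sparsity)
    # while thickening the tails, as the bivariate kurtosis indicates.
    return gp_draws.prod(axis=1)

x = np.linspace(0.0, 1.0, 200)
gp_path = sample_ping(x, n_components=1)    # plain GP draw
ping_path = sample_ping(x, n_components=3)  # sparser, heavier-tailed draw
```

Increasing `n_components` is the knob the abstract refers to: each additional factor pushes more of the sample path toward zero while preserving smoothness, since a product of smooth functions is smooth.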