
Bayesian Singular Value Regularization via a Cumulative Shrinkage Process

Added by Masahiro Tanaka
Publication date: 2020
Language: English





This study proposes a novel hierarchical prior for inferring possibly low-rank matrices measured with noise. We consider a three-component matrix factorization, as in singular value decomposition, and its fully Bayesian inference. The proposed prior is specified as a scale mixture of exponential distributions with spike and slab components. The weights of the spike/slab parts are inferred using a special prior based on a cumulative shrinkage process. The proposed prior is designed to push less important, or essentially redundant, singular values toward zero increasingly aggressively, leading to more accurate estimates of low-rank matrices. To ensure parameter identification, we simulate posterior draws from an approximate posterior, in which the identification constraints are slightly relaxed, using a No-U-Turn sampler. Through a set of simulation studies, we show that our proposal is competitive with alternative prior specifications and does not incur a significant additional computational burden. We apply the proposed approach to sectoral industrial production in the United States to analyze structural change during the Great Moderation.
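To make the construction concrete, here is a minimal simulation sketch of the prior's key ingredient: spike-and-slab exponential scales whose spike probabilities follow a cumulative shrinkage (stick-breaking) process. All names and hyperparameter values (`alpha`, `slab_rate`, `spike_rate`) are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulative_shrinkage_singular_values(H, alpha=2.0,
                                         slab_rate=0.5, spike_rate=50.0):
    """Illustrative draw of H singular values from a spike-and-slab scale
    mixture of exponentials with cumulative-shrinkage spike weights."""
    # Stick-breaking: w_h = v_h * prod_{l<h}(1 - v_l), with v_h ~ Beta(1, alpha).
    v = rng.beta(1.0, alpha, size=H)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    # Cumulative shrinkage: the spike probability pi_h = sum_{l<=h} w_l is
    # nondecreasing in h, so later singular values are shrunk more aggressively.
    pi = np.cumsum(w)
    spike = rng.random(H) < pi
    # Spike: exponential with a large rate (mass near zero); slab: small rate.
    rate = np.where(spike, spike_rate, slab_rate)
    d = rng.exponential(scale=1.0 / rate)
    return np.sort(d)[::-1], pi

d, pi = cumulative_shrinkage_singular_values(H=10)
print("spike probabilities:", np.round(pi, 2))
print("simulated singular values:", np.round(d, 3))
```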

Related research

We develop singular value shrinkage priors for the mean matrix parameters in the matrix-variate normal model with known covariance matrices. Our priors are superharmonic and put more weight on matrices with smaller singular values. They are a natural generalization of the Stein prior. Bayes estimators and Bayesian predictive densities based on our priors are minimax and dominate those based on the uniform prior in finite samples. In particular, our priors work well when the true value of the parameter has low rank.
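For intuition, a generic stand-in for singular value shrinkage (simple soft-thresholding, not the minimax superharmonic-prior Bayes estimator developed in that paper) can be sketched as follows; the function name and threshold `tau` are hypothetical.

```python
import numpy as np

def shrink_singular_values(Y, tau=4.0):
    """Soft-threshold the singular values of Y: a crude illustration of an
    estimator that puts more weight on matrices with small singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # small singular values hit zero first
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 8))  # rank-3 truth
Y = M + 0.5 * rng.standard_normal(M.shape)                      # noisy observation
print(np.linalg.matrix_rank(shrink_singular_values(Y)))  # rank after shrinkage
```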
We study the problem of sparse signal detection on a spatial domain. We propose a novel approach to model continuous signals that are sparse and piecewise smooth as a product of independent Gaussian processes (PING) with a smooth covariance kernel. The smoothness of the PING process is ensured by the smoothness of the covariance kernels of the Gaussian components in the product, and sparsity is controlled by the number of components. The bivariate kurtosis of the PING process shows that more components in the product result in a thicker tail and a sharper peak at zero. Simulation results demonstrate the improvement in estimation using the PING prior over a Gaussian process (GP) prior for different image regressions. We apply our method to a longitudinal MRI dataset to detect the regions that are affected by multiple sclerosis (MS) in the greatest magnitude, through an image-on-scalar regression model. Due to the huge dimensionality of these images, we transform the data into the spectral domain and develop methods to conduct the computation in this domain. In our MS imaging study, the estimates from the PING model are more informative than those from the GP model.
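The PING construction itself is easy to prototype. The sketch below (hypothetical kernel and settings, a 1-D grid rather than a spatial image) draws a few smooth zero-mean GP paths and multiplies them pointwise; the product is mostly near zero with occasional sharp excursions, matching the sparsity intuition above.

```python
import numpy as np

def rbf_kernel(x, length_scale=0.2):
    """Smooth (squared-exponential) covariance kernel on a 1-D grid."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def ping_draw(x, n_components=3, jitter=1e-8, seed=2):
    """Draw from a product of independent GPs (PING): more components give
    a thicker tail and a sharper peak at zero, i.e., a sparser signal."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(x) + jitter * np.eye(len(x))
    L = np.linalg.cholesky(K)
    gp_paths = L @ rng.standard_normal((len(x), n_components))
    return gp_paths.prod(axis=1)  # pointwise product of the GP components

x = np.linspace(0.0, 1.0, 200)
signal = ping_draw(x)  # mostly near zero, with sparse smooth bumps
```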
Modern deep neural networks (DNNs) often require high memory consumption and large computational loads. To deploy DNN algorithms efficiently on edge or mobile devices, a series of DNN compression algorithms have been explored, including factorization methods, which approximate the weight matrix of a DNN layer with the product of two or more low-rank matrices. However, it is hard to measure the ranks of DNN layers during the training process. Previous works mainly induce low rank through implicit approximations or via a costly singular value decomposition (SVD) on every training step; the former usually incurs a high accuracy loss, while the latter is inefficient. In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step. SVD training first decomposes each layer into the form of its full-rank SVD and then trains directly on the decomposed weights. We add orthogonality regularization to the singular vectors, which ensures the valid form of the SVD and avoids gradient vanishing/exploding. Low rank is encouraged by applying sparsity-inducing regularizers to the singular values of each layer. Singular value pruning is applied at the end to explicitly reach a low-rank model. We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a greater reduction in computational load at the same accuracy, compared not only with previous factorization methods but also with state-of-the-art filter pruning methods.
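The two regularizers can be written down compactly. The following NumPy sketch (hypothetical function names and penalty weights; the actual method trains the decomposed weights inside a deep learning framework) shows the orthogonality penalty on the singular vectors, the sparsity penalty on the singular values, and the final pruning step.

```python
import numpy as np

def svd_training_penalties(U, s, V, lam_orth=1.0, lam_sparse=1e-3):
    """Regularizers added to the task loss when training on the decomposed
    weights W = U @ diag(s) @ V.T of one layer.
    - Orthogonality keeps U and V near orthonormal columns (a valid SVD form).
    - L1 on the singular values encourages low rank."""
    r = s.shape[0]
    orth = np.sum((U.T @ U - np.eye(r)) ** 2) + np.sum((V.T @ V - np.eye(r)) ** 2)
    sparse = np.sum(np.abs(s))
    return lam_orth * orth + lam_sparse * sparse

def prune_singular_values(U, s, V, threshold=1e-2):
    """Final singular value pruning: drop near-zero components to reach an
    explicitly low-rank layer."""
    keep = s > threshold
    return U[:, keep], s[keep], V[:, keep]
```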
We propose Dirichlet Process Mixture (DPM) models for prediction and cluster-wise variable selection, based on two choices of shrinkage baseline prior distributions for the linear regression coefficients, namely the Horseshoe prior and the Normal-Gamma prior. We show in a simulation study that each of the two proposed DPM models tends to outperform the standard DPM model based on the non-shrinkage normal prior in terms of predictive, variable selection, and clustering accuracy. This is especially true for the Horseshoe model, and when the number of covariates exceeds the within-cluster sample size. A real data set is analyzed to illustrate the proposed modeling methodology, where both proposed DPM models again attained better predictive accuracy.
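For reference, the Horseshoe baseline places a half-Cauchy local scale on each coefficient; a minimal prior simulation (with a hypothetical global scale `tau`) looks like this:

```python
import numpy as np

def horseshoe_draw(p, tau=0.1, seed=3):
    """Draw p regression coefficients from the Horseshoe prior:
    beta_j | lambda_j ~ N(0, (lambda_j * tau)^2), lambda_j ~ HalfCauchy(0, 1).
    Heavy tails let true signals escape shrinkage; the peak at zero kills noise."""
    rng = np.random.default_rng(seed)
    lam = np.abs(rng.standard_cauchy(p))  # half-Cauchy local scales
    return rng.standard_normal(p) * lam * tau

beta = horseshoe_draw(p=50)  # most entries near zero, a few large
```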
This paper introduces the functional tensor singular value decomposition (FTSVD), a novel dimension reduction framework for tensors with one functional mode and several tabular modes. The problem is motivated by high-order longitudinal data analysis. Our model assumes the observed data to be a random realization of an approximate CP low-rank functional tensor measured on a discrete time grid. Incorporating tensor algebra and the theory of Reproducing Kernel Hilbert Space (RKHS), we propose a novel RKHS-based constrained power iteration with spectral initialization. Our method can successfully estimate both singular vectors and functions of the low-rank structure in the observed data. With mild assumptions, we establish the non-asymptotic contractive error bounds for the proposed algorithm. The superiority of the proposed framework is demonstrated via extensive experiments on both simulated and real data.
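A stripped-down version of the core computation is ordinary alternating power iteration for a rank-1 fit of a 3-way tensor; the sketch below omits the RKHS smoothing of the functional mode and the spectral initialization that the paper adds, so it is illustrative only.

```python
import numpy as np

def rank1_power_iteration(T, n_iter=50, seed=4):
    """Alternating power iteration for a rank-1 approximation of a 3-way
    tensor T as weight * outer(a, b, c) with unit-norm factors a, b, c."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(T.shape[0])
    b = rng.standard_normal(T.shape[1])
    c = rng.standard_normal(T.shape[2])
    for _ in range(n_iter):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    weight = np.einsum('ijk,i,j,k->', T, a, b, c)
    return weight, a, b, c
```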
