
Approximate Method of Variational Bayesian Matrix Factorization/Completion with Sparse Prior

Added by Koujin Takeda
Publication date: 2018
Research language: English





We derive an analytical expression for the matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of a low-rank dense matrix and a sparse matrix with additive noise. Taking matrix sparsity into consideration, we place a Laplace prior on the sparse matrix. We then use several approximations to derive the matrix factorization/completion solution. Using our solution, we also numerically evaluate the performance of sparse matrix reconstruction in matrix factorization, and of the recovery of missing matrix elements in matrix completion.
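The abstract does not reproduce the derived update equations, so the following is only a minimal numpy sketch of the assumed generative model (a low-rank dense factor times a sparse factor plus noise) together with a simplified alternating update: ridge for the dense factor and soft-thresholding for the Laplace-prior sparse factor. All sizes and hyper-parameters are illustrative assumptions, and the loop is a MAP-style stand-in for intuition, not the paper's variational Bayes solution.

import numpy as np

rng = np.random.default_rng(0)

# Assumed generative model: observed matrix V is the product of a
# low-rank dense matrix A_true and a sparse matrix B_true, plus noise.
m, n, r = 50, 40, 5                      # hypothetical sizes and rank
A_true = rng.normal(size=(m, r))
B_true = rng.laplace(scale=0.5, size=(r, n)) * (rng.random((r, n)) < 0.3)
V = A_true @ B_true + 0.1 * rng.normal(size=(m, n))

# Simplified stand-in for the variational Bayes solution: alternate a
# ridge (Gaussian-prior) update for A with an ISTA-style soft-thresholding
# (MAP under a Laplace prior) update for B.
lam_A, lam_B, n_iter = 1e-2, 0.1, 200    # assumed hyper-parameters
A = rng.normal(size=(m, r))
B = rng.normal(size=(r, n))
for _ in range(n_iter):
    A = V @ B.T @ np.linalg.inv(B @ B.T + lam_A * np.eye(r))
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)
    G = B - step * A.T @ (A @ B - V)
    B = np.sign(G) * np.maximum(np.abs(G) - step * lam_B, 0.0)

err = np.linalg.norm(A @ B - A_true @ B_true) / np.linalg.norm(A_true @ B_true)
print("relative reconstruction error:", err)

For matrix completion, the same loop would be restricted to the observed entries via a binary mask on V.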



Related research


Man Luo, Qinghua Guo, Ming Jin (2021)
Sparse Bayesian learning (SBL) can be implemented with low complexity based on the approximate message passing (AMP) algorithm. However, it does not work well for a generic measurement matrix, which may cause AMP to diverge. Damped AMP has been used for SBL to alleviate the problem at the cost of reducing convergence speed. In this work, we propose a new SBL algorithm based on structured variational inference, leveraging AMP with a unitary transformation (UAMP). Both single measurement vector and multiple measurement vector problems are investigated. It is shown that, compared to state-of-the-art AMP-based SBL algorithms, the proposed UAMP-SBL is more robust and efficient, leading to remarkably better performance.
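As a hedged illustration of the unitary transformation that UAMP-style algorithms build on (the AMP/SBL message-passing updates themselves are omitted, and all names and sizes here are assumptions):

import numpy as np

rng = np.random.default_rng(1)
n, p = 80, 100
# A generic (ill-conditioned) measurement matrix of the kind that can make plain AMP diverge
A = rng.normal(size=(n, p)) @ np.diag(np.linspace(0.05, 1.0, p))
x = rng.normal(size=p) * (rng.random(p) < 0.1)   # sparse signal
y = A @ x + 0.01 * rng.normal(size=n)

# Unitary transform: with A = U @ diag(S) @ Vt, left-multiplying the model by U.T
# gives an equivalent system  r = Phi @ x + noise  with  Phi = diag(S) @ Vt.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
r = U.T @ y
Phi = np.diag(S) @ Vt
# The (U)AMP / SBL updates would then be run on (r, Phi) instead of (y, A).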
Variational dropout (VD) is a generalization of Gaussian dropout, which aims at inferring the posterior of network weights based on a log-uniform prior on them to learn these weights as well as dropout rate simultaneously. The log-uniform prior not only interprets the regularization capacity of Gaussian dropout in network training, but also underpins the inference of such posterior. However, the log-uniform prior is an improper prior (i.e., its integral is infinite) which causes the inference of posterior to be ill-posed, thus restricting the regularization performance of VD. To address this problem, we present a new generalization of Gaussian dropout, termed variational Bayesian dropout (VBD), which turns to exploit a hierarchical prior on the network weights and infer a new joint posterior. Specifically, we implement the hierarchical prior as a zero-mean Gaussian distribution with variance sampled from a uniform hyper-prior. Then, we incorporate such a prior into inferring the joint posterior over network weights and the variance in the hierarchical prior, with which both the network training and the dropout rate estimation can be cast into a joint optimization problem. More importantly, the hierarchical prior is a proper prior which enables the inference of posterior to be well-posed. In addition, we further show that the proposed VBD can be seamlessly applied to network compression. Experiments on both classification and network compression tasks demonstrate the superior performance of the proposed VBD in terms of regularizing network training.
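A toy sketch of the hierarchical prior described above, a zero-mean Gaussian weight whose variance is drawn from a uniform hyper-prior, makes the contrast with the improper log-uniform prior concrete; the bound var_max and the sample size are assumptions, and this is not the paper's training procedure:

import numpy as np

rng = np.random.default_rng(2)
n_weights = 10_000
var_max = 1.0                                  # assumed upper bound of the uniform hyper-prior

# Hierarchical prior: sigma2 ~ Uniform(0, var_max),  w | sigma2 ~ Normal(0, sigma2).
sigma2 = rng.uniform(0.0, var_max, size=n_weights)
w = rng.normal(0.0, np.sqrt(sigma2))

# Unlike the improper log-uniform prior, this prior integrates to one, so the
# joint posterior over (w, sigma2) used for training and dropout-rate estimation
# is well defined.
print("empirical weight variance:", w.var())   # close to var_max / 2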
The exponential is a basic signal form, and how to acquire such signals rapidly is one of the fundamental problems and frontiers of signal processing. To achieve this goal, only partial data may be acquired, but this introduces severe artifacts into the spectrum, i.e., the Fourier transform of the exponentials. Reliable spectrum reconstruction is therefore highly desirable for fast sampling in many applications, such as chemistry, biology, and medical imaging. In this work, we propose a deep learning method whose neural network structure is designed by unrolling the iterative process of the model-based state-of-the-art exponential reconstruction method with low-rank Hankel matrix factorization. With experiments on synthetic data and realistic biological magnetic resonance signals, we demonstrate that the new method yields much lower reconstruction errors and preserves low-intensity signals much better.
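The low-rank Hankel structure that the unrolled network exploits is easy to verify numerically: a noiseless sum of R exponentials yields a Hankel matrix of rank R. A small sketch with assumed frequencies and damping factors:

import numpy as np
from scipy.linalg import hankel

N, R = 64, 3
t = np.arange(N)
freqs = np.array([0.10, 0.21, 0.35])      # hypothetical normalized frequencies
damps = np.array([0.010, 0.020, 0.015])   # hypothetical damping factors
x = sum(np.exp((-d + 2j * np.pi * f) * t) for f, d in zip(freqs, damps))

# Hankel matrix whose anti-diagonals are the signal samples
H = hankel(x[: N // 2 + 1], x[N // 2:])
print("Hankel matrix rank:", np.linalg.matrix_rank(H, tol=1e-8))  # equals R for noiseless data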
This work considers variational Bayesian inference as an inexpensive and scalable alternative to a fully Bayesian approach in the context of sparsity-promoting priors. In particular, the priors considered arise from scale mixtures of Normal distributions with a generalized inverse Gaussian mixing distribution. This includes the variational Bayesian LASSO as an inexpensive and scalable alternative to the Bayesian LASSO introduced in [56]. It also includes priors which more strongly promote sparsity. For linear models the method requires only the iterative solution of deterministic least squares problems. Furthermore, for $n \rightarrow \infty$ data points and $p$ unknown covariates the method can be implemented exactly online with a cost of $O(p^3)$ in computation and $O(p^2)$ in memory. For large $p$ an approximation is able to achieve promising results for a cost of $O(p)$ in both computation and memory. Strategies for hyper-parameter tuning are also considered. The method is implemented for real and simulated data. It is shown that the performance in terms of variable selection and uncertainty quantification of the variational Bayesian LASSO can be comparable to the Bayesian LASSO for problems which are tractable with that method, and for a fraction of the cost. The present method comfortably handles $n = p = 131,073$ on a laptop in minutes, and $n = 10^5$, $p = 10^6$ overnight.
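The "iterative solution of deterministic least squares problems" can be illustrated with a generic reweighted-ridge loop for an L1 (LASSO-type) penalty; this is a standard majorize-minimize sketch under assumed data and regularization, not the paper's variational update equations:

import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 200, 50, 1.0                   # assumed problem size and penalty
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:5] = 3.0 * rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares initialization
eps = 1e-8
for _ in range(50):
    # Each iteration solves one deterministic (weighted ridge) least squares problem
    D = np.diag(lam / (np.abs(w) + eps))
    w = np.linalg.solve(X.T @ X + D, X.T @ y)

print("coefficients kept:", int(np.sum(np.abs(w) > 1e-3)), "of", p)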
We propose a method for solving statistical mechanics problems defined on sparse graphs. It extracts a small Feedback Vertex Set (FVS) from the sparse graph, converting the sparse system into a much smaller system with dense, many-body interactions and an effective energy for every configuration of the FVS, then learns a variational distribution parameterized by neural networks to approximate the original Boltzmann distribution. The method is able to estimate the free energy, compute observables, and generate unbiased samples via direct sampling without auto-correlation. Extensive experiments show that our approach is more accurate than existing approaches for sparse spin glasses. On random graphs and real-world networks, our approach significantly outperforms standard methods for sparse systems such as the belief-propagation algorithm; on structured sparse systems such as two-dimensional lattices, our approach is significantly faster and more accurate than recently proposed variational autoregressive networks using convolutional neural networks.
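A rough sketch of the first step, extracting a feedback vertex set from a sparse graph, using a simple highest-degree greedy heuristic (the paper's actual selection rule and the neural variational distribution are not reproduced here):

import networkx as nx

def greedy_fvs(graph):
    """Greedily remove highest-degree nodes until the remaining graph is acyclic."""
    g = graph.copy()
    fvs = []
    while nx.cycle_basis(g):                       # a cycle still exists
        node, _ = max(g.degree, key=lambda kv: kv[1])
        g.remove_node(node)
        fvs.append(node)
    return fvs

g = nx.random_regular_graph(3, 30, seed=0)         # small sparse test graph
fvs = greedy_fvs(g)
print(f"FVS size {len(fvs)} out of {g.number_of_nodes()} nodes")
# The statistical-mechanics problem would then be reduced to configurations of
# the FVS variables, with the tree-like remainder handled exactly.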
