
A Fast Linear Regression via SVD and Marginalization

Added by Philip Greengard
Publication date: 2020
Language: English





We describe a numerical scheme for evaluating the posterior moments of Bayesian linear regression models with partial pooling of the coefficients. The principal analytical tool of the evaluation is a change of basis from coefficient space to the space of singular vectors of the matrix of predictors. After this change of basis and an analytical integration, we reduce the problem of finding moments of a density over k + m dimensions to that of finding moments of an m-dimensional density, where k is the number of coefficients and k + m is the dimension of the posterior. Moments can then be computed using, for example, MCMC, the trapezoid rule, or adaptive Gaussian quadrature. The evaluation of the SVD of the matrix of predictors is the dominant computational cost and is performed once, during the precomputation stage. We demonstrate the algorithm with numerical results. The scheme described in this paper generalizes naturally to multilevel and multi-group hierarchical regression models in which normal-normal parameters appear.
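To make the reduction concrete, below is a minimal sketch (our illustration, not the authors' code) for the simplest normal-normal case: $y = X\beta + \varepsilon$ with $\varepsilon \sim N(0, \sigma^2 I)$ and an exchangeable prior $\beta \sim N(0, \tau^2 I_k)$, so the remaining hyperparameters $\theta = (\sigma, \tau)$ form an m = 2 dimensional marginal. The flat hyperprior, the grid bounds, and all function names are our assumptions. The SVD is computed once; the coefficients are integrated out analytically; the two-dimensional hyperparameter marginal is then handled with the trapezoid rule, as the abstract suggests.

```python
import numpy as np

def trapz_weights(g):
    """1-d trapezoid-rule quadrature weights for an arbitrary grid g."""
    w = np.zeros_like(g)
    w[:-1] += 0.5 * np.diff(g)
    w[1:]  += 0.5 * np.diff(g)
    return w

def svd_marginal_posterior_mean(X, y, sig_grid, tau_grid):
    """Posterior mean of beta in y = X beta + eps, eps ~ N(0, sig^2 I),
    beta ~ N(0, tau^2 I_k), flat hyperprior on theta = (sig, tau).
    Assumes n >= k with X of full column rank."""
    n, k = X.shape
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # dominant one-time cost
    z = U.T @ y                                       # data in the singular basis
    r2 = y @ y - z @ z                                # residual outside col(X)

    # log p(y | theta) after integrating beta out analytically:
    # y ~ N(0, sig^2 I + tau^2 X X^T), diagonalized by the left singular vectors
    logp = np.empty((sig_grid.size, tau_grid.size))
    for i, sig in enumerate(sig_grid):
        for j, tau in enumerate(tau_grid):
            v = sig**2 + tau**2 * s**2                # variances of z given theta
            logp[i, j] = -0.5 * (np.log(v).sum() + (z**2 / v).sum()
                                 + (n - k) * np.log(sig**2) + r2 / sig**2)

    # normalized trapezoid-rule measure on the 2-d hyperparameter grid
    W = np.outer(trapz_weights(sig_grid), trapz_weights(tau_grid))
    W *= np.exp(logp - logp.max())
    W /= W.sum()

    # E[beta | y]: average the conditional (ridge) mean over p(theta | y)
    mean = np.zeros(k)
    for i, sig in enumerate(sig_grid):
        for j, tau in enumerate(tau_grid):
            shrink = s / (s**2 + sig**2 / tau**2)     # shrinkage per singular direction
            mean += W[i, j] * (Vt.T @ (shrink * z))
    return mean
```

After the one-time SVD, each grid point costs only O(k) for the marginal likelihood, which is the point of the precomputation; calling the function with, say, `np.linspace(0.05, 2, 40)` grids on a small simulated problem returns the least squares coefficients shrunk in the singular basis and averaged over the hyperparameter posterior.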



Related research

This work considers variational Bayesian inference as an inexpensive and scalable alternative to a fully Bayesian approach in the context of sparsity-promoting priors. In particular, the priors considered arise from scale mixtures of Normal distributions with a generalized inverse Gaussian mixing distribution. This includes the variational Bayesian LASSO as an inexpensive and scalable alternative to the Bayesian LASSO introduced in [56]. It also includes priors which more strongly promote sparsity. For linear models the method requires only the iterative solution of deterministic least squares problems. Furthermore, for $n \to \infty$ data points and $p$ unknown covariates the method can be implemented exactly online with a cost of $O(p^3)$ in computation and $O(p^2)$ in memory. For large $p$ an approximation is able to achieve promising results for a cost of $O(p)$ in both computation and memory. Strategies for hyper-parameter tuning are also considered. The method is implemented for real and simulated data. It is shown that the performance in terms of variable selection and uncertainty quantification of the variational Bayesian LASSO can be comparable to the Bayesian LASSO for problems which are tractable with that method, and for a fraction of the cost. The present method comfortably handles $n = p = 131,073$ on a laptop in minutes, and $n = 10^5$, $p = 10^6$ overnight.
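A rough sketch of the "iterative deterministic least squares" flavor of such a method follows. This is a generic variational-Bayes LASSO iteration with the noise variance and penalty held fixed, written by us for illustration; the paper's actual updates, hyperparameter tuning, and O(p) online variant differ.

```python
import numpy as np

def vb_lasso(X, y, lam=1.0, sigma2=1.0, iters=50):
    """Minimal VB-LASSO sketch: the Laplace prior is a scale mixture of
    normals, and each step is a deterministic ridge-type least squares
    solve whose weights come from the current posterior moments."""
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    m, S = np.zeros(p), np.eye(p)
    for _ in range(iters):
        # E[1/tau_j^2] under the current Gaussian factor q(beta)
        d = lam / np.sqrt(m**2 + np.diag(S) + 1e-12)
        S = np.linalg.inv(XtX / sigma2 + np.diag(d))  # O(p^3) per step
        m = S @ Xty / sigma2                          # posterior mean of beta
    return m, S
```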
Yu Tong, Dong An, Nathan Wiebe (2020)
Preconditioning is the most widely used and effective way of treating ill-conditioned linear systems in the context of classical iterative linear system solvers. We introduce a quantum primitive called fast inversion, which can be used as a preconditioner for solving quantum linear systems. The key idea of fast inversion is to directly block-encode a matrix inverse through a quantum circuit implementing the inversion of eigenvalues via classical arithmetic. We demonstrate the application of preconditioned linear system solvers to computing single-particle Green's functions of quantum many-body systems, which are widely used in quantum physics, chemistry, and materials science. We analyze the complexities in three scenarios: the Hubbard model, the quantum many-body Hamiltonian in the planewave-dual basis, and the Schwinger model. We also provide a method for performing Green's function calculations in second quantization within a fixed particle manifold, and we note that this approach may be valuable for simulation more broadly. Besides solving linear systems, fast inversion also allows us to develop fast algorithms for computing matrix functions, such as the efficient preparation of Gibbs states. We introduce two efficient approaches to this task, based on the contour integral formulation and the inverse transform, respectively.
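The quantum primitive itself has no short classical listing, but the benefit of preconditioning with a cheaply invertible part is easy to demonstrate classically. In the sketch below (our illustration; matrices, sizes, and tolerances are arbitrary) we solve $Hx = b$ with $H = A + B$, where $A$ is diagonal and hence "fast to invert", once without and once with $M \approx A^{-1}$ as the preconditioner.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n = 400
a = np.linspace(1.0, 1e4, n)                        # wide spectrum: ill-conditioned
H = np.diag(a) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

iters = {"plain": 0, "precond": 0}
def counter(key):
    def cb(rk):                                     # called once per inner iteration
        iters[key] += 1
    return cb

# "Fast inversion" of the easy part: M acts as A^{-1} on a vector in O(n)
M = LinearOperator((n, n), matvec=lambda v: v / a)
x_plain, _ = gmres(H, b, callback=counter("plain"), callback_type="pr_norm")
x_prec, _  = gmres(H, b, M=M, callback=counter("precond"), callback_type="pr_norm")
print(iters)   # the preconditioned solve typically needs far fewer iterations
```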
Markov chain Monte Carlo (MCMC) methods have become increasingly popular in applied mathematics as a tool for numerical integration with respect to complex and high-dimensional distributions. However, applying MCMC methods to heavy-tailed distributions and distributions with analytically intractable densities turns out to be rather problematic. In this paper, we propose a novel approach to using MCMC algorithms for distributions with analytically known Fourier transforms and, in particular, heavy-tailed distributions. The main idea of the proposed approach is to use MCMC methods in the Fourier domain to sample from a density proportional to the absolute value of the underlying characteristic function. A subsequent application of Parseval's formula leads to an efficient algorithm for computing integrals with respect to the underlying density. We show that the resulting Markov chain in the Fourier domain may be geometrically ergodic even when the original distribution is heavy-tailed. We illustrate our approach with several numerical examples, including multivariate elliptically contoured stable distributions.
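A toy version of the idea (our illustration, not an example from the paper): the standard Cauchy distribution is heavy-tailed but has the simple characteristic function $\phi(u) = e^{-|u|}$. We run random-walk Metropolis on a density proportional to $|\phi|$ and recover $E[g(X)]$ for $g(x) = e^{-x^2/2}$ through Parseval's formula.

```python
import numpy as np

rng = np.random.default_rng(1)
phi = lambda u: np.exp(-np.abs(u))                        # Cauchy characteristic function
ghat = lambda u: np.sqrt(2 * np.pi) * np.exp(-u**2 / 2)   # Fourier transform of g

# Random-walk Metropolis in the Fourier domain, targeting pi(u) ∝ |phi(u)|
u, chain = 0.0, []
for _ in range(100_000):
    prop = u + rng.normal(scale=2.0)
    if rng.random() < phi(prop) / phi(u):
        u = prop
    chain.append(u)
samples = np.array(chain[5_000:])                          # drop burn-in

# Parseval: E[g(X)] = (1/2pi) * integral of ghat(u) phi(u) du.
# Sampling u from |phi|/C with C = integral |phi| = 2 gives the weighted average:
est = (2.0 / (2 * np.pi)) * np.mean(ghat(samples) * np.sign(phi(samples)))
print(f"{est:.4f}")   # exact value: e^{1/2} erfc(1/sqrt(2)) ~ 0.5231
```

Here $\phi \ge 0$, so the sign factor is identically one; it is kept to show the general weighting when the characteristic function changes sign.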
Statistical analysis of massive datasets often involves expensive linear algebra operations with large dense matrices. Typical tasks are estimation of the unknown parameters of the underlying statistical model and prediction of missing values. We developed the H-MLE procedure, which solves these tasks. The unknown parameters can be estimated by maximizing the joint Gaussian log-likelihood function, which depends on a covariance matrix. To decrease the high computational cost, we approximate the covariance matrix in the hierarchical (H-) matrix format. The H-matrix technique allows us to work with inhomogeneous covariance matrices and almost arbitrary locations; in particular, H-matrices can be applied even when the matrices under consideration are dense and unstructured. For validation purposes, we implemented three machine learning methods: k-nearest neighbors (kNN), random forests, and a deep neural network. The best results (for the given datasets) were obtained by the kNN method with three or seven neighbors, depending on the dataset. The results computed with the H-MLE method were compared with those obtained by the kNN method. The developed H-matrix code and all datasets are freely available online.
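A dense-matrix sketch of the underlying MLE task follows (our illustration: an exponential covariance with a dense Cholesky factorization standing in for the H-matrix approximation; the kNN comparison is omitted).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def neg_loglik(params, pts, z):
    """Negative joint Gaussian log-likelihood for the exponential
    covariance C(h) = s2 * exp(-h / ell). The O(n^3) dense Cholesky
    here is exactly the step H-matrix arithmetic makes cheap."""
    ell, s2 = np.exp(params)                       # optimize on the log scale
    K = s2 * np.exp(-cdist(pts, pts) / ell) + 1e-8 * np.eye(len(z))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L, z)
    return np.log(np.diag(L)).sum() + 0.5 * alpha @ alpha

rng = np.random.default_rng(0)
pts = rng.uniform(size=(300, 2))                   # unstructured locations
K_true = 2.0 * np.exp(-cdist(pts, pts) / 0.3) + 1e-8 * np.eye(300)
z = np.linalg.cholesky(K_true) @ rng.standard_normal(300)

res = minimize(neg_loglik, x0=np.log([0.1, 1.0]), args=(pts, z),
               method="Nelder-Mead")
print(np.exp(res.x))   # estimated (length-scale, variance), roughly (0.3, 2.0)
```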
Given an arbitrary matrix $A \in \mathbb{R}^{n \times n}$, we consider the fundamental problem of computing $Ax$ for any $x \in \mathbb{R}^n$ such that $Ax$ is $s$-sparse. While fast algorithms exist for particular choices of $A$, such as the discrete Fourier transform, there is currently no $o(n^2)$ algorithm that treats the unstructured case. In this paper, we devise a randomized approach to tackle the unstructured case. Our method relies on a representation of $A$ in terms of certain real-valued mutually unbiased bases derived from Kerdock sets. In the preprocessing phase of our algorithm, we compute this representation of $A$ in $O(n^3 \log n)$ operations. Next, given any unit vector $x \in \mathbb{R}^n$ such that $Ax$ is $s$-sparse, our randomized fast transform uses this representation of $A$ to compute the entrywise $\epsilon$-hard threshold of $Ax$ with high probability in only $O(sn + \epsilon^{-2}\|A\|_{2\to\infty}^2 n \log n)$ operations. In addition to a performance guarantee, we provide numerical results that demonstrate the plausibility of real-world implementation of our algorithm.
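For reference, the quadratic-cost baseline the randomized transform competes with is just a dense matvec followed by an entrywise hard threshold. The sketch below (ours) shows that baseline and the thresholding operation, not the Kerdock-based algorithm itself.

```python
import numpy as np

def hard_threshold(v, eps):
    """Entrywise eps-hard threshold: keep entries with |v_i| > eps, zero the rest."""
    return np.where(np.abs(v) > eps, v, 0.0)

rng = np.random.default_rng(0)
n = 1024
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
x /= np.linalg.norm(x)                     # unit vector, as in the paper's setting
y = hard_threshold(A @ x, eps=0.5)         # O(n^2) dense matvec + O(n) threshold
```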