Lasso Regression: Estimation and Shrinkage via Limit of Gibbs Sampling


Abstract

The lasso is widely advocated in high-dimensional settings where only a small number of the regression coefficients are believed to be nonzero. Moreover, statistical properties of high-dimensional lasso estimators are often proved under the assumption that the correlation between the predictors is bounded. In this vein, coordinatewise methods, the most common means of computing the lasso solution, work well in the presence of low to moderate multicollinearity. However, the computational speed of coordinatewise algorithms degrades as sparsity decreases and multicollinearity increases. Motivated by these limitations, we propose the novel Deterministic Bayesian Lasso algorithm for computing the lasso solution. This algorithm is developed by considering a limiting version of the Bayesian lasso. The performance of the Deterministic Bayesian Lasso improves as sparsity decreases and multicollinearity increases, and it can offer substantial gains in computational speed. A rigorous theoretical analysis demonstrates that (1) the Deterministic Bayesian Lasso algorithm converges to the lasso solution, and (2) it leads to a representation of the lasso estimator which shows how it achieves both $\ell_1$ and $\ell_2$ types of shrinkage simultaneously. Connections to other algorithms are also provided. The benefits of the Deterministic Bayesian Lasso algorithm are then illustrated on simulated and real data.
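For reference, the lasso solution discussed above is, under the standard convention (scaling factors such as $1/2$ or $1/(2n)$ on the loss vary across the literature), the $\ell_1$-penalized least-squares estimator

$$\hat{\beta}_{\text{lasso}} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \left\{ \tfrac{1}{2} \, \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1 \right\}, \qquad \lambda > 0,$$

where $y$ denotes the $n$-vector of responses, $X$ the $n \times p$ design matrix, and $\lambda$ the tuning parameter governing the amount of shrinkage; this is a sketch of the usual definition, not notation taken from the paper itself.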
