
Fully Bayesian Penalized Regression with a Generalized Bridge Prior

Added by Ding Xiang
Publication date: 2017
Research language: English





We consider penalized regression models under a unified framework in which the particular method is determined by the form of the penalty term. We propose a fully Bayesian approach that accommodates both sparse and dense settings, and we show how a model averaging approach can be used to eliminate the nuisance penalty parameters and to perform inference through the marginal posterior distribution of the regression coefficients. We establish tail robustness of the resulting estimator as well as conditional and marginal posterior consistency. We develop an efficient component-wise Markov chain Monte Carlo algorithm for sampling. Numerical results show that the method tends to select the optimal penalty, performs well in both variable selection and prediction, and is comparable to, and often better than, alternative methods. Both simulated and real data examples are provided.
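To make the setup concrete, the sketch below implements a generic component-wise random-walk Metropolis sampler for a Gaussian linear model with an independent bridge-type prior exp(-lam * |beta_j|**q) on each coefficient. This is only an illustration of the kind of component-wise MCMC the abstract mentions: the fixed values of lam, q, and sigma2, the step size, and all function names are assumptions, whereas the paper's actual method averages over the penalty parameters rather than fixing them.

import numpy as np

def log_target(j, beta, X, y, lam, q, sigma2):
    # Gaussian log likelihood plus the bridge log prior term for
    # component j; prior terms for the other components are identical
    # in the proposal and current state, so they cancel in the ratio.
    resid = y - X @ beta
    return -0.5 * np.sum(resid**2) / sigma2 - lam * np.abs(beta[j])**q

def componentwise_mh(X, y, lam=1.0, q=1.5, sigma2=1.0,
                     n_iter=5000, step=0.1, seed=None):
    # Component-wise random-walk Metropolis over the coefficients.
    # lam, q, and sigma2 are held fixed in this sketch; the paper
    # instead eliminates the penalty parameters by averaging over them.
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        for j in range(p):
            prop = beta.copy()
            prop[j] += step * rng.standard_normal()
            if np.log(rng.random()) < (log_target(j, prop, X, y, lam, q, sigma2)
                                       - log_target(j, beta, X, y, lam, q, sigma2)):
                beta = prop
        draws[t] = beta
    return draws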



Related research

In this article, we consider a non-parametric Bayesian approach to multivariate quantile regression. The collection of related conditional distributions of a response vector Y given a univariate covariate X is modeled using a Dependent Dirichlet Process (DDP) prior. The DDP is used to introduce dependence across x. Because realizations from a Dirichlet process prior are almost surely discrete, we convolve it with a kernel. To model the error distribution as flexibly as possible, we use a countable mixture of multidimensional normal distributions as our kernel. For posterior computations, we use a truncated stick-breaking representation of the DDP. This approximation enables us to deal with only a finite number of parameters. We use a blocked Gibbs sampler for estimating the model parameters. We illustrate our method with simulation studies and real data applications. Finally, we provide a theoretical justification for the proposed method through posterior consistency. Our proposed procedure is new even when the response is univariate.
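As a point of reference for the truncated stick-breaking representation mentioned above, the following generic sketch draws a finite vector of Dirichlet-process weights. The concentration parameter alpha, the truncation level K, and the function name are illustrative assumptions, not details taken from the article.

import numpy as np

def truncated_stick_breaking(alpha, K, seed=None):
    # Draw stick fractions v_k ~ Beta(1, alpha) and form the weights
    # w_k = v_k * prod_{l<k} (1 - v_l). Forcing the last fraction to 1
    # makes the K weights sum to one, which is the usual finite
    # approximation used inside blocked Gibbs samplers.
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)
    v[-1] = 1.0
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

w = truncated_stick_breaking(alpha=2.0, K=25)
assert np.isclose(w.sum(), 1.0)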
This article is concerned with bridge regression, a special family of penalized regression with penalty function $\sum_{j=1}^{p}|\beta_j|^q$ with $q>0$, in a linear model with linear restrictions. The proposed restricted bridge (RBRIDGE) estimator simultaneously estimates parameters and selects important variables when prior information about the parameters is available, in either the low-dimensional or the high-dimensional case. Using a local quadratic approximation, the penalty term can be approximated around a vector of local initial values, and the RBRIDGE estimator then enjoys a closed-form expression that can be solved for any $q>0$. Special cases of our proposal are the restricted LASSO ($q=1$), restricted RIDGE ($q=2$), and restricted Elastic Net ($1< q < 2$) estimators. We provide some theoretical properties of the RBRIDGE estimator for the low-dimensional case, whereas the computational aspects are given for both the low- and high-dimensional cases. An extensive Monte Carlo simulation study is conducted based on different pieces of prior information, and the performance of the RBRIDGE estimator is compared with some competing penalized estimators as well as the ORACLE. We also analyze four real data examples for the sake of comparison. The numerical results show that the suggested RBRIDGE estimator performs outstandingly well when the prior information is exact or nearly exact.
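The closed form that the local quadratic approximation yields is easiest to see in the unrestricted case: replacing each $|\beta_j|^q$ by its quadratic expansion around the current iterate turns every step into a weighted ridge solve. The sketch below shows only this unrestricted iteration; the linear restrictions that define RBRIDGE would add a constrained step not reproduced here, and the floor eps, the iteration count, and the function name are illustrative choices.

import numpy as np

def bridge_lqa(X, y, lam, q, n_iter=50, eps=1e-8):
    # Minimize 0.5*||y - X b||^2 + lam * sum_j |b_j|^q by iterating
    # the local quadratic approximation: at each step the penalty is
    # replaced by a quadratic with weights q * |b_j|^(q-2), giving a
    # closed-form ridge-type update. Coefficients driven toward zero
    # receive ever larger weights and are shrunk accordingly.
    p = X.shape[1]
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.linalg.solve(XtX + lam * np.eye(p), Xty)  # ridge start
    for _ in range(n_iter):
        d = q * np.abs(beta).clip(min=eps) ** (q - 2.0)
        beta = np.linalg.solve(XtX + lam * np.diag(d), Xty)
    return beta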
Model fitting often aims to fit a single model, assuming that the imposed form of the model is correct. However, there may be multiple possible underlying explanatory patterns in a set of predictors that could explain a response. Model selection that disregards model uncertainty can fail to bring these patterns to light. We present multi-model penalized regression (MMPR) to acknowledge model uncertainty in the context of penalized regression. In the penalty form explored here, we examine how different settings can promote either shrinkage or sparsity of coefficients in the separate models. The method is tuned to explicitly limit model similarity. A choice of penalty form that enforces variable selection is applied to predict stacking fault energy (SFE) from steel alloy composition. The aim is to identify multiple models with different subsets of covariates that explain a single type of response.
Record linkage (de-duplication or entity resolution) is the process of merging noisy databases to remove duplicate entities. While record linkage removes duplicate entities from such databases, the downstream task is any inferential, predictive, or post-linkage task performed on the linked data. One goal of the downstream task is obtaining a larger reference data set, allowing one to perform more accurate statistical analyses. In addition, there is inherent record linkage uncertainty that is passed to the downstream task. Motivated by the above, we propose a generalized Bayesian record linkage method and consider multiple regression analysis as the downstream task. Records are linked via a random partition model, which allows a wide class of models to be considered. In addition, we jointly model the record linkage and the downstream task, which allows one to account for the record linkage uncertainty exactly. Moreover, one is able to propagate information from the proposed Bayesian record linkage model back into the downstream task. This feedback effect is essential to eliminate potential biases that can jeopardize the resulting downstream inference. We apply our methodology to multiple linear regression and illustrate empirically that the feedback effect is able to improve the performance of record linkage.
Modern applications require methods that are computationally feasible on large datasets while also preserving statistical efficiency. Frequently, these two concerns are seen as contradictory: approximation methods that enable computation are assumed to degrade statistical performance relative to exact methods. In applied mathematics, where much of the current theoretical work on approximation resides, the inputs are considered to be observed exactly. The prevailing philosophy is that while the exact problem is, regrettably, unsolvable, any approximation error should be as small as possible. However, from a statistical perspective, an approximate or regularized solution may be preferable to the exact one. Regularization formalizes a trade-off between fidelity to the data and adherence to prior knowledge about the data-generating process, such as smoothness or sparsity. The resulting estimator tends to be more useful, interpretable, and suitable as an input to other methods. In this paper, we propose new methodology for estimation and prediction under a linear model, borrowing insights from the approximation literature. We explore these procedures from a statistical perspective and find that in many cases they improve both computational and statistical performance.
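The trade-off described here can be made concrete with ordinary ridge regression, which is a standard textbook example rather than the paper's proposal: written through the SVD of X, the regularized solution damps the directions with small singular values instead of inverting them exactly, which is precisely where the exact least-squares solution is both numerically unstable and statistically fragile.

import numpy as np

def ridge_via_svd(X, y, lam):
    # Ridge solution computed through the SVD of X. The filter factors
    # s / (s**2 + lam) damp directions with small singular values,
    # trading exact fidelity to the data for stability; at lam = 0 this
    # reduces to the pseudoinverse least-squares solution.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))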
