
Bayesian Variable Selection and Estimation Based on Global-Local Shrinkage Priors

Posted by: Xueying Tang
Publication date: 2016
Research field: Mathematical Statistics
Paper language: English





In this paper, we consider the Bayesian variable selection problem for the linear regression model with global-local shrinkage priors on the regression coefficients. We propose a variable selection procedure that selects a variable if the ratio of the posterior mean to the ordinary least squares estimate of the corresponding coefficient is greater than $1/2$. Under the assumption of orthogonal designs, we show that if the local parameters have polynomial-tailed priors, the proposed method enjoys the oracle property in the sense that it achieves variable selection consistency and the optimal estimation rate at the same time. However, if an exponential-tailed prior is used for the local parameters instead, the proposed method does not have the oracle property.
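The selection rule itself is simple to apply once posterior means and OLS estimates are available. Below is a minimal sketch of that rule; it assumes the posterior means have already been obtained elsewhere (e.g., from an MCMC fit under a global-local shrinkage prior), and the function name and example numbers are illustrative, not taken from the paper.

```python
import numpy as np

def select_variables(posterior_mean, ols_estimate, threshold=0.5):
    """Select variable j when the ratio of the posterior mean of beta_j
    to its OLS estimate exceeds 1/2, as in the proposed rule.
    Both inputs are length-p arrays; coefficients with a zero OLS
    estimate are never selected."""
    ratio = np.zeros_like(posterior_mean, dtype=float)
    nonzero = ols_estimate != 0
    ratio[nonzero] = posterior_mean[nonzero] / ols_estimate[nonzero]
    return ratio > threshold

# Hypothetical estimates for p = 4 coefficients: strongly shrunk
# coefficients (small ratio) are dropped, lightly shrunk ones are kept.
post_mean = np.array([2.10, 0.05, -1.70, 0.02])
ols = np.array([2.30, 0.90, -1.90, 0.80])
print(select_variables(post_mean, ols))  # [ True False  True False]
```

Because shrinkage priors pull the posterior mean toward zero while preserving its sign, the ratio falls in $[0, 1)$, so the $1/2$ cutoff separates coefficients that are shrunk by more than half from those that are not.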


Read also

An important task in building regression models is to decide which regressors should be included in the final model. In a Bayesian approach, variable selection can be performed using mixture priors with a spike and a slab component for the effects subject to selection. As the spike is concentrated at zero, variable selection is based on the probability of assigning the corresponding regression effect to the slab component. These posterior inclusion probabilities can be determined by MCMC sampling. In this paper we compare the MCMC implementations for several spike and slab priors with regard to posterior inclusion probabilities and their sampling efficiency for simulated data. Further, we investigate posterior inclusion probabilities analytically for different slabs in two simple settings. Application of variable selection with spike and slab priors is illustrated on a data set of psychiatric patients where the goal is to identify covariates affecting metabolism.
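The posterior inclusion probabilities described above are just the Monte Carlo frequencies with which each regressor is assigned to the slab component across MCMC iterations. Here is a minimal sketch of that computation; it assumes the sampler already returns draws of the binary inclusion indicators, and the array contents are hypothetical.

```python
import numpy as np

def posterior_inclusion_probabilities(indicator_draws):
    """Estimate posterior inclusion probabilities from MCMC output.

    indicator_draws: array of shape (n_draws, p) with 0/1 entries,
    where entry (s, j) is 1 if regressor j was assigned to the slab
    component at MCMC iteration s. The PIP of regressor j is the
    average of column j."""
    return indicator_draws.mean(axis=0)

# Hypothetical indicator draws for p = 3 regressors over 5 iterations
draws = np.array([[1, 0, 1],
                  [1, 0, 1],
                  [1, 1, 0],
                  [1, 0, 1],
                  [1, 0, 1]])
print(posterior_inclusion_probabilities(draws))  # [1.  0.2 0.8]
```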
Ray Bai, Malay Ghosh (2017)
We consider sparse Bayesian estimation in the classical multivariate linear regression model with $p$ regressors and $q$ response variables. In univariate Bayesian linear regression with a single response $y$, shrinkage priors which can be expressed as scale mixtures of normal densities are popular for obtaining sparse estimates of the coefficients. In this paper, we extend the use of these priors to the multivariate case to estimate a $p \times q$ coefficient matrix $\mathbf{B}$. We derive sufficient conditions for posterior consistency under the Bayesian multivariate linear regression framework and prove that our method achieves posterior consistency even when $p>n$ and even when $p$ grows at a nearly exponential rate with the sample size. We derive an efficient Gibbs sampling algorithm and provide the implementation in a comprehensive R package called MBSP. Finally, we demonstrate through simulations and data analysis that our model has excellent finite sample performance.
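The scale-mixture-of-normals representation mentioned above writes a shrinkage prior on a coefficient as a normal distribution whose scale is itself random. The sketch below illustrates this with the horseshoe prior, a standard global-local example; it is a generic illustration in Python, not the MBSP package or the multivariate prior of the paper, and the parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def horseshoe_prior_draws(p, tau=1.0, n_draws=10000):
    """Draw coefficients from a horseshoe prior, a global-local
    shrinkage prior written as a scale mixture of normals:
        lambda_j ~ C+(0, 1)                  (local scales)
        beta_j | lambda_j, tau ~ N(0, tau^2 * lambda_j^2)
    tau is the global shrinkage parameter."""
    lam = np.abs(rng.standard_cauchy(size=(n_draws, p)))  # half-Cauchy local scales
    beta = rng.normal(0.0, tau * lam)                      # conditionally normal draws
    return beta

samples = horseshoe_prior_draws(p=3)
print(samples.shape)  # (10000, 3)
```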
Zichen Ma, Ernest Fokoue (2015)
In this paper, we introduce a new methodology for Bayesian variable selection in linear regression that is independent of the traditional indicator method. A diagonal matrix $\mathbf{G}$ is introduced into the prior of the coefficient vector $\boldsymbol{\beta}$, with each diagonal entry $g_j$, bounded between $0$ and $1$, serving as a stabilizer of the corresponding $\beta_j$. Mathematically, a promising variable has a $g_j$ value close to $0$, whereas the $g_j$ corresponding to an unpromising variable is close to $1$. This property is proven in this paper under orthogonality, together with other asymptotic properties. Computationally, the sample path of each $g_j$ is obtained through a Metropolis-within-Gibbs sampling method. We also present two simulations to verify the capability of this methodology for variable selection.
This paper presents objective priors for robust Bayesian estimation against outliers based on divergences. The minimum $\gamma$-divergence estimator is well known to perform well under heavy contamination. Robust Bayesian methods using quasi-posterior distributions based on divergences have also been proposed in recent years. In the objective Bayesian framework, the selection of default prior distributions under such quasi-posterior distributions is an important problem. In this study, we provide some properties of reference and moment matching priors under the quasi-posterior distribution based on the $\gamma$-divergence. In particular, we show that the proposed priors are approximately robust under a condition on the contamination distribution, without assuming any conditions on the contamination ratio. Some simulation studies are also presented.
We address the problem of dynamic variable selection in time series regression with unknown residual variances, where the set of active predictors is allowed to evolve over time. To capture time-varying variable selection uncertainty, we introduce new dynamic shrinkage priors for the time series of regression coefficients. These priors are characterized by two main ingredients: smooth parameter evolutions and intermittent zeroes for modeling predictive breaks. More formally, our proposed Dynamic Spike-and-Slab (DSS) priors are constructed as mixtures of two processes: a spike process for the irrelevant coefficients and a slab autoregressive process for the active coefficients. The mixing weights are themselves time-varying and depend on lagged values of the series. Our DSS priors are probabilistically coherent in the sense that their stationary distribution is fully known and characterized by spike-and-slab marginals. For posterior sampling over dynamic regression coefficients, model selection indicators, as well as unknown dynamic residual variances, we propose a Dynamic SSVS algorithm based on forward-filtering and backward-sampling. To scale our method to large data sets, we develop a Dynamic EMVS algorithm for MAP smoothing. We demonstrate, through simulation and a topical macroeconomic dataset, that DSS priors are very effective at separating active and noisy coefficients. Our fast implementation significantly extends the reach of spike-and-slab methods to large time series data.