
Improving multilevel regression and poststratification with structured priors

Added by Yuxiang Gao
Publication date: 2019
Language: English





A central theme in the field of survey statistics is estimating population-level quantities from data coming from potentially non-representative samples of the population. Multilevel Regression and Poststratification (MRP), a model-based approach, is gaining traction over the traditional weighting approach to survey estimation. MRP estimates are susceptible to bias if there is an underlying structure that the methodology does not capture. This work aims to provide a new framework for specifying structured prior distributions that lead to bias reduction in MRP estimates. We use simulation studies to explore the benefit of these prior distributions and demonstrate their efficacy on non-representative US survey data. We show that structured prior distributions offer absolute bias reduction and variance reduction for posterior MRP estimates in a large variety of data regimes.
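As a rough illustration of the poststratification step that MRP builds on (a minimal sketch only; in practice the cell-level estimates come from a fitted multilevel regression model, and all values below are made up):

import numpy as np

# Hypothetical posterior mean outcome for each poststratification cell;
# in MRP these come from a multilevel regression fit to the survey data.
cell_estimates = np.array([0.52, 0.47, 0.61, 0.39])

# Census (population) counts for the same cells.
cell_counts = np.array([1200, 800, 1500, 500])

# MRP point estimate: population-weighted average of the cell-level estimates.
mrp_estimate = np.sum(cell_counts * cell_estimates) / np.sum(cell_counts)
print(mrp_estimate)

The structured priors described above would enter through the multilevel model that produces the cell-level estimates, not through this weighting step.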



Related research

Multilevel regression and poststratification (MRP) is a flexible modeling technique that has been used in a broad range of small-area estimation problems. Traditionally, MRP studies have focused on non-causal settings, where estimating a single population value from a nonrepresentative sample was of primary interest. In this manuscript, MRP-style estimators are evaluated in an experimental causal inference setting. We simulate a large-scale randomized controlled trial with a stratified cluster sampling design, and compare traditional and nonparametric treatment effect estimation methods with MRP methodology. Using MRP-style estimators, treatment effect estimates for areas as small as 1.3% of the population have lower bias and variance than standard causal inference methods, even in the presence of treatment effect heterogeneity. The design of our simulation studies also requires us to build upon an MRP variant that allows non-census covariates to be incorporated into poststratification.
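Under the same poststratification logic, a hedged sketch of how an MRP-style small-area treatment effect could be assembled from arm-specific cell estimates (illustrative only; the cell-level predictions would come from multilevel outcome models, and all names and numbers below are hypothetical):

import numpy as np

# Hypothetical posterior mean outcomes per poststratification cell, by arm.
treated_cells = np.array([0.64, 0.58, 0.71])
control_cells = np.array([0.55, 0.50, 0.62])

# Population counts for the cells that make up one small area of interest.
area_counts = np.array([300, 450, 250])

# Poststratified treatment effect for that area:
# population-weighted average of the cell-level differences.
area_effect = np.sum(area_counts * (treated_cells - control_cells)) / np.sum(area_counts)
print(area_effect)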
Kolyan Ray, Botond Szabo (2019)
We study a mean-field spike and slab variational Bayes (VB) approximation to Bayesian model selection priors in sparse high-dimensional linear regression. Under compatibility conditions on the design matrix, oracle inequalities are derived for the mean-field VB approximation, implying that it converges to the sparse truth at the optimal rate and gives optimal prediction of the response vector. The empirical performance of our algorithm is studied, showing that it performs comparably to other state-of-the-art Bayesian variable selection methods. We also numerically demonstrate that the widely used coordinate-ascent variational inference (CAVI) algorithm can be highly sensitive to the parameter updating order, leading to potentially poor performance. To mitigate this, we propose a novel prioritized updating scheme that uses a data-driven updating order and performs better in simulations. The variational algorithm is implemented in the R package sparsevb.
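For orientation, a generic mean-field spike-and-slab setup of the kind described above can be written schematically (the symbols are illustrative, not taken from the paper): each coefficient receives a prior of the form $\beta_j \sim (1-w)\,\delta_0 + w\,\mathrm{Slab}(\lambda)$, and the posterior is approximated by a product-form variational family $Q(\boldsymbol{\beta}) = \prod_j \left[(1-\gamma_j)\,\delta_0 + \gamma_j\,N(\mu_j,\sigma_j^2)\right]$, where $\gamma_j$ plays the role of a posterior inclusion probability. CAVI updates the triples $(\gamma_j,\mu_j,\sigma_j^2)$ one coordinate at a time, which is where the sensitivity to the updating order arises.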
Zichen Ma, Ernest Fokoue (2015)
In this paper, we introduce a new methodology for Bayesian variable selection in linear regression that is independent of the traditional indicator method. A diagonal matrix $\mathbf{G}$ is introduced into the prior of the coefficient vector $\boldsymbol{\beta}$, with each $g_j$ on the diagonal, bounded between $0$ and $1$, serving as a stabilizer of the corresponding $\beta_j$. Mathematically, a promising variable has a $g_j$ value close to $0$, whereas the $g_j$ corresponding to an unpromising variable is close to $1$. This property is proven in this paper under orthogonality, together with other asymptotic properties. Computationally, the sample path of each $g_j$ is obtained through the Metropolis-within-Gibbs sampling method. We also present two simulations to verify the capability of this methodology for variable selection.
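As a hedged illustration of the Metropolis-within-Gibbs scheme mentioned above (a generic skeleton only; the actual full conditional of each $g_j$ depends on the paper's prior and likelihood, and the placeholder below does not implement it):

import numpy as np

rng = np.random.default_rng(0)

def log_conditional(g_j, j, g):
    # Placeholder: the real log full conditional of g_j given the remaining
    # parameters and the data is model-specific and not reproduced here.
    return 0.0

# Generic Metropolis-within-Gibbs sweep: propose each g_j uniformly on (0, 1)
# (a symmetric independence proposal) and accept with the Metropolis ratio.
g = np.full(5, 0.5)
samples = []
for it in range(1000):
    for j in range(len(g)):
        proposal = rng.uniform()
        log_ratio = log_conditional(proposal, j, g) - log_conditional(g[j], j, g)
        if np.log(rng.uniform()) < log_ratio:
            g[j] = proposal
    samples.append(g.copy())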
Quantile regression is studied in combination with a penalty which promotes structured (or group) sparsity. A mixed $\ell_{1,\infty}$-norm on the parameter vector is used to impose structured sparsity on the traditional quantile regression problem. An algorithm is derived to calculate the piecewise linear solution path of the corresponding minimization problem. A Matlab implementation of the proposed algorithm is provided, and some applications of the methods are also studied.
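In the notation commonly used for this kind of problem (a schematic only; the symbols are not taken from the paper), the penalized objective combines the quantile check loss with a group-wise $\ell_{1,\infty}$ penalty: $\min_{\boldsymbol{\beta}} \sum_{i=1}^{n} \rho_\tau(y_i - \mathbf{x}_i^\top \boldsymbol{\beta}) + \lambda \sum_{g=1}^{G} \|\boldsymbol{\beta}_g\|_\infty$, where $\rho_\tau(u) = u\,(\tau - \mathbf{1}\{u < 0\})$ is the check loss at quantile level $\tau$ and $\boldsymbol{\beta}_g$ collects the coefficients of group $g$. Shrinking $\|\boldsymbol{\beta}_g\|_\infty$ to zero removes the whole group, which is the structured-sparsity effect described above.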
We propose a novel spike and slab prior specification with scaled beta prime marginals for the importance parameters of regression coefficients to allow for general effect selection within the class of structured additive distributional regression. This enables us to model effects on all distributional parameters for arbitrary parametric distributions, and to consider various effect types such as non-linear or spatial effects as well as hierarchical regression structures. Our spike and slab prior relies on a parameter expansion that separates blocks of regression coefficients into overall scalar importance parameters and vectors of standardised coefficients. Hence, we can work with a scalar quantity for effect selection instead of a possibly high-dimensional effect vector, which yields improved shrinkage and sampling performance compared to the classical normal-inverse-gamma prior. We investigate the propriety of the posterior, show that the prior yields desirable shrinkage properties, propose a way of eliciting prior parameters and provide an efficient Markov chain Monte Carlo sampling scheme. Using both simulated data and three large-scale data sets, we show that our approach is applicable for data with a potentially large number of covariates, multilevel predictors accounting for hierarchically nested data and non-standard response distributions, such as bivariate normal or zero-inflated Poisson.
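Schematically, and as an assumption about notation rather than a reproduction of the paper's formulation, the parameter expansion described above can be pictured as writing each effect block as $\boldsymbol{\beta}_j = \tau_j\,\tilde{\boldsymbol{\beta}}_j$, where $\tilde{\boldsymbol{\beta}}_j$ is a vector of standardised coefficients and $\tau_j$ is a scalar importance parameter that receives the spike and slab prior with scaled beta prime marginals; effect selection then reduces to deciding whether $\tau_j$ is shrunk to (near) zero.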