
Constrained inference through posterior projections

Added by Deborshee Sen
Publication date: 2018
Language: English

Bayesian approaches are appealing for constrained inference problems, allowing a probabilistic characterization of uncertainty while providing computational machinery for incorporating complex constraints in hierarchical models. However, the usual Bayesian strategy of placing a prior on the constrained space and conducting posterior computation with Markov chain Monte Carlo algorithms is often intractable. An alternative is to conduct inference for a less constrained posterior and project samples to the constrained space through a minimal-distance mapping. We formalize and provide a unifying framework for such posterior projections. For theoretical tractability, we initially focus on constrained parameter spaces corresponding to closed and convex subsets of the original space. We then consider non-convex Stiefel manifolds. We provide a general formulation of the projected posterior and show that, for particular classes of priors and likelihood functions, it can be viewed as an update of a data-dependent prior with the likelihood. We also show that asymptotic properties of the unconstrained posterior are transferred to the projected posterior. Posterior projections are illustrated through multiple examples, in both simulation studies and real data applications.
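
As an illustrative sketch of the projection step described above (not the paper's code), the following Python snippet assumes posterior draws are already available, here simulated Gaussian draws standing in for MCMC output, and applies the minimal-distance mapping to two of the constrained spaces the paper considers: a closed convex set (the nonnegative orthant) and a non-convex Stiefel manifold, where the nearest point in Frobenius norm is obtained from an SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for MCMC output: draws from a hypothetical unconstrained posterior.
theta = rng.normal(loc=[-0.5, 1.0, 2.0], scale=1.0, size=(5000, 3))

# Euclidean projection onto a closed convex set (the nonnegative orthant).
# For convex sets the minimal-distance mapping is unique, so the projected
# draws define a well-posed projected posterior.
theta_proj = np.maximum(theta, 0.0)

# Projection onto the (non-convex) Stiefel manifold {X : X^T X = I}:
# the nearest point in Frobenius norm to M = U S V^T is the polar factor U V^T.
def stiefel_project(M):
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

M_draws = rng.normal(size=(1000, 4, 2))    # draws of an unconstrained 4x2 matrix
M_proj = np.stack([stiefel_project(M) for M in M_draws])

print(theta_proj.mean(axis=0))                          # projected posterior mean
print(np.allclose(M_proj[0].T @ M_proj[0], np.eye(2)))  # orthonormality check
```

Summaries of the projected draws (means, quantiles) then serve directly as summaries of the projected posterior.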

Related research

Masahiro Tanaka, 2018
A local projection is a statistical framework that accounts for the relationship between an exogenous variable and an endogenous variable measured at different time points. Local projections are often applied in impulse response analyses and direct forecasting. While local projections are becoming increasingly popular because of their robustness to misspecification and their flexibility, they are less statistically efficient than standard methods such as vector autoregression. In this study, we seek to improve the statistical efficiency of local projections by developing a fully Bayesian approach that estimates local projections using roughness penalty priors. By incorporating such prior-induced smoothness, we can use information contained in successive observations to enhance the statistical efficiency of inference. We apply the proposed approach to an analysis of monetary policy in the United States, showing that the roughness penalty priors successfully estimate the impulse response functions and improve the predictive accuracy of local projections.
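
To make the roughness-penalty idea concrete, here is a hedged sketch (not the paper's specification): placing a Gaussian prior on the second differences of the horizon-specific coefficients shrinks the estimated impulse response toward a smooth curve, and the resulting MAP estimate is a ridge-type smoother. The data-generating process and the penalty strength `lam` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical local projections y_{t+h} = beta_h * x_t + e_t for h = 0..H,
# simplified to a common regressor x so the sketch stays short.
H, T = 12, 300
true_irf = np.exp(-np.arange(H + 1) / 4.0)   # smooth "true" impulse response
x = rng.normal(size=T)
y = np.array([b * x + rng.normal(scale=1.0, size=T) for b in true_irf])

# Per-horizon least-squares estimates (standard local projections).
beta_ols = y @ x / (x @ x)

# Roughness penalty prior: second differences of beta are Gaussian,
# D2 @ beta ~ N(0, tau^2 I); the MAP estimate is a ridge-type smoother.
D2 = np.diff(np.eye(H + 1), n=2, axis=0)     # second-difference operator
lam = 5.0                                    # illustrative penalty ~ sigma^2 / tau^2
XtX = (x @ x) * np.eye(H + 1)
beta_smooth = np.linalg.solve(XtX + lam * D2.T @ D2, XtX @ beta_ols)

print(np.round(beta_ols, 2))     # noisy horizon-by-horizon estimates
print(np.round(beta_smooth, 2))  # smoothed impulse response
```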
Marginal structural models (MSMs) with inverse probability weighting (IPW) are used to estimate causal effects of time-varying treatments, but can exhibit erratic finite-sample performance when there is low overlap in covariate distributions across different treatment patterns. Modifications to IPW that target the average treatment effect (ATE) estimand either introduce bias or rely on unverifiable parametric assumptions and extrapolation. This paper extends an alternative estimand, the average treatment effect on the overlap population (ATO), which is estimated on a sub-population with a reasonable probability of receiving alternate treatment patterns, to time-varying treatment settings. To estimate the ATO within an MSM framework, this paper extends a stochastic pruning method based on the posterior predictive treatment assignment (PPTA), as well as a weighting analogue, to the time-varying treatment setting. Simulations demonstrate the performance of these extensions compared against IPW and stabilized weighting with regard to bias, efficiency and coverage. Finally, an analysis using these methods is performed on Medicare beneficiaries residing across 18,480 zip codes in the U.S. to evaluate the effect of coal-fired power plant emissions exposure on ischemic heart disease hospitalization, accounting for seasonal patterns that lead to changes in treatment over time.
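
The contrast between inverse probability weights and overlap-type weights can be sketched in a two-period toy example; the product-form overlap weight below is a stylized stand-in for the paper's PPTA-based pruning and weighting extensions, and all simulation settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical two-period binary treatment with confounding.
X1 = rng.normal(size=n)
e1 = 1 / (1 + np.exp(-1.5 * X1))              # true period-1 propensity
A1 = rng.binomial(1, e1)
X2 = X1 + A1 + rng.normal(size=n)
e2 = 1 / (1 + np.exp(-1.5 * X2))              # true period-2 propensity
A2 = rng.binomial(1, e2)

# Standard MSM inverse probability weights: product over time of
# 1 / P(A_t | history); these can explode when overlap is poor.
p1 = np.where(A1 == 1, e1, 1 - e1)
p2 = np.where(A2 == 1, e2, 1 - e2)
w_ipw = 1.0 / (p1 * p2)

# Overlap-style analogue: weight each unit by the probability of the
# *opposite* treatment at each period, concentrating on units with
# genuine equipoise (a stylized stand-in for the ATO extension).
w_ato = np.where(A1 == 1, 1 - e1, e1) * np.where(A2 == 1, 1 - e2, e2)

print("max IPW weight:", float(w_ipw.max()))
print("max ATO weight:", float(w_ato.max()))  # bounded above by 1
```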
The prior distribution on parameters of a likelihood is the usual starting point for Bayesian uncertainty quantification. In this paper, we present a different perspective. Given a finite data sample $Y_{1:n}$ of size $n$ from an infinite population, we focus on the missing $Y_{n+1:\infty}$ as the source of statistical uncertainty, with the parameter of interest being known precisely given $Y_{1:\infty}$. We argue that the foundation of Bayesian inference is to assign a predictive distribution on $Y_{n+1:\infty}$ conditional on $Y_{1:n}$, which then induces a distribution on the parameter of interest. Demonstrating an application of martingales, Doob shows that choosing the Bayesian predictive distribution returns the conventional posterior as the distribution of the parameter. Taking this as our cue, we relax the predictive machine, avoiding the need for the predictive to be derived solely from the usual prior to posterior to predictive density formula. We introduce the martingale posterior distribution, which returns Bayesian uncertainty directly on any statistic of interest without the need for the likelihood and prior, and this distribution can be sampled through a computational scheme we name predictive resampling. To that end, we introduce new predictive methodologies for multivariate density estimation, regression and classification that build upon recent work on bivariate copulas.
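
A minimal sketch of predictive resampling, using the simplest exchangeable predictive, a Pólya urn, which recovers the Bayesian bootstrap rather than the copula-based predictives developed in the paper; the statistic of interest here is the population mean, and all sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=2.0, scale=1.0, size=50)    # observed sample Y_{1:n}

def predictive_resample(y, n_future=2000):
    """Forward-sample Y_{n+1:N} from a Polya-urn predictive (each new
    value drawn uniformly from everything seen so far) and return the
    statistic of interest computed on the completed sequence."""
    z = list(y)
    for _ in range(n_future):
        z.append(z[rng.integers(len(z))])
    return np.mean(z)

# Martingale posterior for the mean: the distribution of the statistic
# across repeated predictive resampling runs, with no likelihood or prior.
draws = np.array([predictive_resample(y) for _ in range(500)])
print(round(draws.mean(), 3), round(draws.std(), 3))
```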
Yukito Iba, Keisuke Yano, 2021
We introduce an information criterion, PCIC, for predictive evaluation based on quasi-posterior distributions. It is regarded as a natural generalisation of the widely applicable information criterion (WAIC) and can be computed via a single Markov chain Monte Carlo run. PCIC is useful in a variety of predictive settings that are not well dealt with in WAIC, including weighted likelihood inference and quasi-Bayesian prediction.
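
Because PCIC generalises WAIC and shares its single-run recipe, the WAIC baseline is a useful reference point; the sketch below computes it from a matrix of pointwise log-likelihoods at posterior draws. The quasi-posterior weighting that distinguishes PCIC is described in the paper and is not reproduced here.

```python
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    """WAIC from an (S draws x n observations) matrix of pointwise
    log-likelihoods at posterior draws; PCIC follows the same single-run
    recipe with quasi-posterior draws and reweighted terms."""
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# Toy check: normal model with known unit variance, conjugate posterior.
rng = np.random.default_rng(4)
y = rng.normal(loc=1.0, size=100)
mu = rng.normal(loc=y.mean(), scale=1 / np.sqrt(len(y)), size=(4000, 1))
loglik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu) ** 2
print(waic(loglik))
```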
Cluster randomized controlled trials (cRCTs) are designed to evaluate interventions delivered to groups of individuals. A practical limitation of such designs is that the number of available clusters may be small, resulting in an increased risk of baseline imbalance under simple randomization. Constrained randomization overcomes this issue by restricting the allocation to a subset of randomization schemes in which sufficient overall covariate balance across comparison arms is achieved with respect to a pre-specified balance metric. However, several aspects of constrained randomization for the design and analysis of multi-arm cRCTs have not been fully investigated. Motivated by an ongoing multi-arm cRCT, we provide a comprehensive evaluation of the statistical properties of model-based and randomization-based tests under both simple and constrained randomization designs in multi-arm cRCTs, with varying combinations of design-based and analysis-based covariate adjustment strategies. In particular, as randomization-based tests have not been extensively studied in multi-arm cRCTs, we additionally develop most-powerful permutation tests under the linear mixed model framework for our comparisons. Our results indicate that under constrained randomization, both model-based and randomization-based analyses can gain power while preserving the nominal type I error rate, given proper analysis-based adjustment for the baseline covariates. The choice of balance metric and candidate set size, and their implications for the testing of pairwise and global hypotheses, are also discussed. Finally, we caution against the design and analysis of multi-arm cRCTs with an extremely small number of clusters, due to insufficient degrees of freedom and the tendency to obtain an overly restricted randomization space.
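
A stylized sketch of constrained randomization for a multi-arm cRCT: sample candidate allocation schemes, score each with a balance metric (here, the variance of arm-level covariate means, an illustrative choice), and randomize within the best-scoring subset. The cluster counts, covariates, and 10% cutoff are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

n_clusters, n_arms, per_arm = 12, 3, 4
X = rng.normal(size=(n_clusters, 2))   # hypothetical cluster-level covariates

def balance_metric(assign):
    """Smaller is better: variance of arm-level covariate means,
    summed over covariates (an illustrative l2-type balance score)."""
    means = np.array([X[assign == a].mean(axis=0) for a in range(n_arms)])
    return float(np.sum(np.var(means, axis=0)))

def random_assignment():
    """One allocation scheme under simple randomization: 4 clusters per arm."""
    perm = rng.permutation(n_clusters)
    assign = np.empty(n_clusters, dtype=int)
    for a in range(n_arms):
        assign[perm[a * per_arm:(a + 1) * per_arm]] = a
    return assign

candidates = [random_assignment() for _ in range(20000)]
scores = np.array([balance_metric(s) for s in candidates])

# Constrained randomization: keep the best 10% of schemes by balance,
# then draw the final allocation uniformly from this candidate set.
cutoff = np.quantile(scores, 0.10)
candidate_set = [s for s, sc in zip(candidates, scores) if sc <= cutoff]
final_scheme = candidate_set[rng.integers(len(candidate_set))]
print(final_scheme)
```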