
Bayesian Doubly Robust Causal Inference via Loss Functions

Published by: Yu Luo
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





Frequentist inference has a well-established supporting theory for doubly robust causal inference based on the potential outcomes framework, realized via outcome regression (OR) and propensity score (PS) models. The Bayesian counterpart, however, is not obvious, as the PS model loses its balancing property under joint modeling. In this paper, we propose a natural and formal Bayesian solution by bridging loss-type Bayesian inference with a utility function derived from the notion of a pseudo-population via a change of measure. Consistency of the posterior distribution is shown for both correctly specified and misspecified OR models. Simulation studies suggest that, compared with the previous Bayesian approach via the Bayesian bootstrap, the proposed method estimates the true causal effect more efficiently and achieves frequentist coverage when the OR model is either correctly specified or fitted with a flexible function of the confounders. Finally, we apply this novel Bayesian method to assess the impact of speed cameras on the reduction of car collisions in England.
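As a point of reference for the frequentist theory the abstract invokes, the standard augmented inverse-probability-weighted (AIPW) doubly robust estimator can be sketched on simulated data. The true OR and PS models are plugged in for brevity (in practice they would be estimated); this is an illustration, not the paper's Bayesian loss-based procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Simulate a confounder X, treatment A, and outcome Y with true effect 2.0
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))             # true propensity score
a = rng.binomial(1, p)
y = 2.0 * a + x + rng.normal(size=n)

# Outcome regression (OR) and propensity score (PS) fits; here the true
# models are used to keep the sketch short.
m1 = 2.0 + x                         # E[Y | A=1, X]
m0 = x                               # E[Y | A=0, X]
ps = p

# AIPW estimate of the average treatment effect E[Y(1)] - E[Y(0)]:
# the OR prediction plus an inverse-probability-weighted residual correction
mu1 = np.mean(a * (y - m1) / ps + m1)
mu0 = np.mean((1 - a) * (y - m0) / (1 - ps) + m0)
ate = mu1 - mu0
```

The estimator remains consistent if either the OR pair `(m1, m0)` or the PS `ps` is correct, which is the double robustness the paper carries over to the Bayesian setting.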




Read also

The goal of causal inference is to understand the outcome of alternative courses of action. However, all causal inference requires assumptions. Such assumptions can be more influential than in typical tasks for probabilistic modeling, and testing those assumptions is important to assess the validity of causal inference. We develop model criticism for Bayesian causal inference, building on the idea of posterior predictive checks to assess model fit. Our approach involves decomposing the problem, separately criticizing the model of treatment assignments and the model of outcomes. Conditioned on the assumption of unconfoundedness (that the treatments are assigned independently of the potential outcomes), we show how to check any additional modeling assumption. Our approach provides a foundation for diagnosing model-based causal inferences.
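The posterior predictive checking idea this abstract builds on can be illustrated with a toy conjugate normal model (a minimal sketch; the paper's separate criticism of the treatment and outcome models is not reproduced here). Replicated data sets are drawn from the posterior predictive, and a discrepancy statistic compares them with the observed data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Observed data: mostly near 0, with one extreme point the check should flag
y = np.array([0.0] * 19 + [8.0])
n = len(y)

# Model: y_i ~ N(mu, 1), conjugate prior mu ~ N(0, 100)
post_var = 1 / (1 / 100 + n)
post_mean = post_var * y.sum()

# Posterior predictive check on T(y) = max(y): draw replicated data sets
# and ask how often their maximum reaches the observed maximum
reps = 4000
mu_draws = rng.normal(post_mean, np.sqrt(post_var), size=reps)
y_rep = rng.normal(mu_draws[:, None], 1.0, size=(reps, n))
p_value = np.mean(y_rep.max(axis=1) >= y.max())
```

An extreme posterior predictive p-value (near 0 here) signals that the normal model cannot reproduce the observed outlier, the same logic the paper applies to the treatment and outcome models in turn.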
This paper derives time-uniform confidence sequences (CS) for causal effects in experimental and observational settings. A confidence sequence for a target parameter $\psi$ is a sequence of confidence intervals $(C_t)_{t=1}^\infty$ such that every one of these intervals simultaneously captures $\psi$ with high probability. Such CSs provide valid statistical inference for $\psi$ at arbitrary stopping times, unlike classical fixed-time confidence intervals which require the sample size to be fixed in advance. Existing methods for constructing CSs focus on the nonasymptotic regime where certain assumptions (such as known bounds on the random variables) are imposed, while doubly robust estimators of causal effects rely on (asymptotic) semiparametric theory. We use sequenti
Due to concerns about parametric model misspecification, there is interest in using machine learning to adjust for confounding when evaluating the causal effect of an exposure on an outcome. Unfortunately, exposure effect estimators that rely on machine learning predictions are generally subject to so-called plug-in bias, which can render naive p-values and confidence intervals invalid. Progress has been made via proposals like targeted maximum likelihood estimation and more recently double machine learning, which rely on learning the conditional mean of both the outcome and exposure. Valid inference can then be obtained so long as both predictions converge (sufficiently fast) to the truth. Focusing on partially linear regression models, we show that a specific implementation of the machine learning techniques can yield exposure effect estimators that have small bias even when one of the first-stage predictions does not converge to the truth. The resulting tests and confidence intervals are doubly robust. We also show that the proposed estimators may fail to be regular when only one nuisance parameter is consistently estimated; nevertheless, we observe in simulation studies that our proposal leads to reduced bias and improved confidence interval coverage in moderate samples.
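The cross-fitted, residual-on-residual construction that double machine learning uses in the partially linear model $Y = \theta A + g(X) + \varepsilon$ can be sketched with a polynomial learner standing in for a generic ML method (an illustration of the general recipe, not the specific implementation studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
theta = 1.5                          # true exposure effect

# Partially linear model: Y = theta*A + g(X) + eps,  A = m(X) + nu
x = rng.uniform(-2, 2, size=n)
a = np.sin(x) + rng.normal(scale=0.5, size=n)
y = theta * a + x**2 + rng.normal(size=n)

def fit_predict(x_tr, t_tr, x_te, deg=5):
    """Polynomial regression standing in for a generic ML learner."""
    c = np.polyfit(x_tr, t_tr, deg)
    return np.polyval(c, x_te)

# Cross-fitting: nuisance functions are learned on one fold and used to
# residualize the other, removing the plug-in bias of naive reuse
half = n // 2
folds = [(slice(0, half), slice(half, n)), (slice(half, n), slice(0, half))]
num = den = 0.0
for tr, te in folds:
    y_res = y[te] - fit_predict(x[tr], y[tr], x[te])   # Y - E[Y|X]
    a_res = a[te] - fit_predict(x[tr], a[tr], x[te])   # A - E[A|X]
    num += a_res @ y_res
    den += a_res @ a_res
theta_hat = num / den                # OLS of residual on residual
```

Regressing the outcome residual on the exposure residual recovers `theta` even though neither nuisance function is modeled parametrically.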
Masahiro Tanaka, 2019
This study proposes a new Bayesian approach to infer binary treatment effects. The approach treats counterfactual untreated outcomes as missing observations and infers them by completing a matrix composed of realized and potential untreated outcomes using a data augmentation technique. We also develop a tailored prior that helps in the identification of parameters and induces the matrix of untreated outcomes to be approximately low rank. Posterior draws are simulated using a Markov Chain Monte Carlo sampler. While the proposed approach is similar to synthetic control methods and other related methods, it has several notable advantages. First, unlike synthetic control methods, the proposed approach does not require stringent assumptions. Second, in contrast to non-Bayesian approaches, the proposed method can quantify uncertainty about inferences in a straightforward and consistent manner. By means of a series of simulation studies, we show that our proposal has a better finite sample performance than that of the existing approaches.
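The core idea of completing an approximately low-rank matrix of untreated potential outcomes can be conveyed with a simple non-Bayesian hard-impute iteration, a point-estimate stand-in for the paper's tailored prior and MCMC data augmentation:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, r = 40, 30, 2                  # periods, units, true rank

# Rank-r matrix of untreated potential outcomes Y(0)
y0 = rng.normal(size=(T, r)) @ rng.normal(size=(r, N))

# Last 10 periods of the first 5 units are treated: Y(0) unobserved there
mask = np.ones((T, N), dtype=bool)
mask[-10:, :5] = False

# Hard-impute: alternate a rank-r SVD truncation with restoring the
# observed entries, so the missing block is filled in by the low-rank fit
z = np.where(mask, y0, 0.0)
for _ in range(500):
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    low_rank = (u[:, :r] * s[:r]) @ vt[:r]
    z = np.where(mask, y0, low_rank)

rel_err = np.linalg.norm(low_rank[~mask] - y0[~mask]) / np.linalg.norm(y0[~mask])
```

The imputed block plays the role of the counterfactual untreated outcomes; the Bayesian version replaces the hard rank constraint with a shrinkage prior and reports a posterior over the missing entries rather than a single completion.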
A large number of statistical models are doubly-intractable: the likelihood normalising term, which is a function of the model parameters, is intractable, as well as the marginal likelihood (model evidence). This means that standard inference techniques to sample from the posterior, such as Markov chain Monte Carlo (MCMC), cannot be used. Examples include, but are not confined to, massive Gaussian Markov random fields, autologistic models and Exponential random graph models. A number of approximate schemes based on MCMC techniques, Approximate Bayesian computation (ABC) or analytic approximations to the posterior have been suggested, and these are reviewed here. Exact MCMC schemes, which can be applied to a subset of doubly-intractable distributions, have also been developed and are described in this paper. As yet, no general method exists which can be applied to all classes of models with doubly-intractable posteriors. In addition, taking inspiration from the Physics literature, we study an alternative method based on representing the intractable likelihood as an infinite series. Unbiased estimates of the likelihood can then be obtained by finite time stochastic truncation of the series via Russian Roulette sampling, although the estimates are not necessarily positive. Results from the Quantum Chromodynamics literature are exploited to allow the use of possibly negative estimates in a pseudo-marginal MCMC scheme such that expectations with respect to the posterior distribution are preserved. The methodology is reviewed on well-known examples such as the parameters in Ising models, the posterior for Fisher-Bingham distributions on the $d$-Sphere and a large-scale Gaussian Markov Random Field model describing the Ozone Column data. This leads to a critical assessment of the strengths and weaknesses of the methodology with pointers to ongoing research.
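The Russian Roulette device mentioned above, unbiased estimation of an infinite series via random truncation, can be sketched for a simple geometric series (a toy target in place of the intractable likelihood series the abstract describes):

```python
import numpy as np

rng = np.random.default_rng(3)

def roulette_estimate(a, q=0.9):
    """Unbiased estimate of sum_k a(k): keep adding terms while a coin with
    continuation probability q comes up heads; term k is reweighted by its
    survival probability q**k, so the truncated sum is unbiased."""
    total, k, w = 0.0, 0, 1.0
    while True:
        total += a(k) / w
        if rng.random() > q:         # stop: truncate the series here
            return total
        k += 1
        w *= q

# Example: sum_k 0.5**k = 2, estimated from finite-time truncations
est = np.mean([roulette_estimate(lambda k: 0.5**k) for _ in range(20_000)])
```

Each call touches only finitely many terms, yet the estimator's expectation equals the full infinite sum, which is what lets it drive a pseudo-marginal MCMC scheme.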