
Optimal Covariate Balancing Conditions in Propensity Score Estimation

Posted by: Yang Ning
Publication date: 2021
Research field: Mathematical statistics
Language: English





Inverse probability of treatment weighting (IPTW) is a popular method for estimating the average treatment effect (ATE). However, empirical studies show that the IPTW estimators can be sensitive to the misspecification of the propensity score model. To address this problem, researchers have proposed to estimate propensity score by directly optimizing the balance of pre-treatment covariates. While these methods appear to empirically perform well, little is known about how the choice of balancing conditions affects their theoretical properties. To fill this gap, we first characterize the asymptotic bias and efficiency of the IPTW estimator based on the Covariate Balancing Propensity Score (CBPS) methodology under local model misspecification. Based on this analysis, we show how to optimally choose the covariate balancing functions and propose an optimal CBPS-based IPTW estimator. This estimator is doubly robust; it is consistent for the ATE if either the propensity score model or the outcome model is correct. In addition, the proposed estimator is locally semiparametric efficient when both models are correctly specified. To further relax the parametric assumptions, we extend our method by using a sieve estimation approach. We show that the resulting estimator is globally efficient under a set of much weaker assumptions and has a smaller asymptotic bias than the existing estimators. Finally, we evaluate the finite sample performance of the proposed estimators via simulation and empirical studies. An open-source software package is available for implementing the proposed methods.
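As a rough illustration of the estimator the abstract builds on, the sketch below computes a plain Horvitz-Thompson IPTW estimate of the ATE on simulated data and reports a covariate-balance diagnostic. This is a minimal sketch under assumed data-generating choices: it fits an ordinary maximum-likelihood logistic regression, whereas CBPS instead chooses the propensity model so that chosen covariate balancing conditions hold; all variable names and coefficients are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated observational data: two confounders X, binary treatment T, outcome Y.
n = 2000
X = rng.normal(size=(n, 2))
ps_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.25 * X[:, 1])))
T = rng.binomial(1, ps_true)
Y = 2.0 * T + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)  # true ATE = 2

# Working propensity score model (plain logistic MLE here; CBPS would instead
# solve for coefficients that make the balancing conditions hold exactly).
ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# Horvitz-Thompson IPTW estimate of the ATE.
ate_iptw = np.mean(T * Y / ps) - np.mean((1 - T) * Y / (1 - ps))

# Balance diagnostic: IPTW-weighted covariate means should agree across arms.
# The choice of which functions of X to balance is exactly what the paper optimizes.
imbalance = (X * (T / ps)[:, None]).mean(0) - (X * ((1 - T) / (1 - ps))[:, None]).mean(0)
print(ate_iptw, imbalance)
```

With a correctly specified propensity model, `ate_iptw` lands near the true value of 2 and the imbalance vector is close to zero; under misspecification the imbalance diagnostic is where the choice of balancing functions starts to matter.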




Read also

153 - Kangjie Zhou, Jinzhu Jia (2021)
In this paper, we propose a propensity score adapted variable selection procedure to select covariates for inclusion in propensity score models, in order to eliminate confounding bias and improve statistical efficiency in observational studies. Our variable selection approach is specially designed for causal inference: it only requires the propensity scores to be $\sqrt{n}$-consistently estimated through a parametric model, and does not require correct specification of potential outcome models. By using estimated propensity scores as inverse probability treatment weights when performing an adaptive lasso on the outcome, it successfully excludes instrumental variables and includes confounders and outcome predictors. We show its oracle properties under linear association conditions. We also perform numerical simulations to illustrate our propensity score adapted covariate selection procedure and evaluate its performance under model misspecification. Comparison to other covariate selection methods is made using artificial data as well, through which we find that our procedure is more powerful in excluding instrumental variables and spurious covariates.
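One way the weighted adaptive-lasso step described above might be sketched is the following, assuming a simple linear data-generating process in which the variable roles (confounder, instrument, outcome predictor) are known by construction; the penalty level and all names are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression, Lasso

rng = np.random.default_rng(1)
n = 1000
Xc = rng.normal(size=n)   # confounder: affects both T and Y
Xi = rng.normal(size=n)   # instrument: affects T only
Xp = rng.normal(size=n)   # outcome predictor: affects Y only
ps_true = 1 / (1 + np.exp(-(Xc + Xi)))
T = rng.binomial(1, ps_true)
Y = T + Xc + Xp + rng.normal(size=n)

# Step 1: estimated propensity scores -> inverse probability treatment weights.
X = np.column_stack([Xc, Xi, Xp])
ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]
w = T / ps + (1 - T) / (1 - ps)

# Step 2: adaptive lasso on the weighted outcome regression, implemented by
# rescaling each column by its initial OLS coefficient magnitude, so that
# variables with tiny initial coefficients receive a heavy effective penalty.
design = np.column_stack([T, Xc, Xi, Xp])
beta0 = LinearRegression().fit(design, Y, sample_weight=w).coef_
Z = design * np.abs(beta0)
lasso = Lasso(alpha=0.05).fit(Z, Y, sample_weight=w)
selected = np.flatnonzero(lasso.coef_)  # indices of retained columns
print(selected)
```

On this simulated data the instrument (column 2) is shrunk out while the treatment, confounder, and outcome predictor are retained, which is the selection behavior the abstract describes.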
We propose novel estimators for categorical and continuous treatments by using an optimal covariate balancing strategy for inverse probability weighting. The resulting estimators are shown to be consistent and asymptotically normal for causal contrasts of interest, either when the model explaining treatment assignment is correctly specified, or when the correct set of bases for the outcome models has been chosen and the assignment model is sufficiently rich. For the categorical treatment case, we show that the estimator attains the semiparametric efficiency bound when all models are correctly specified. For the continuous case, the causal parameter of interest is a function of the treatment dose. The latter is not parametrized, and the estimators proposed are shown to have bias and variance of the classical nonparametric rate. Asymptotic results are complemented with simulations illustrating the finite sample properties. Our analysis of a data set suggests a nonlinear effect of BMI on the decline in self-reported health.
Selective inference (post-selection inference) is a methodology that has attracted much attention in recent years in the fields of statistics and machine learning. Naive inference based on data that are also used for model selection tends to produce overestimation, and so selective inference conditions on the event that the model was selected. In this paper, we develop selective inference in propensity score analysis with a semiparametric approach, which has become a standard tool in causal inference. Specifically, for the most basic causal inference model, in which the causal effect can be written as a linear sum of confounding variables, we conduct lasso-type variable selection by adding an $\ell_1$ penalty term to the loss function that gives a semiparametric estimator. Confidence intervals are then given for the coefficients of the selected confounding variables, conditional on the event of variable selection, with asymptotic guarantees. An important property of this method is that it does not require modeling of nonparametric regression functions for the outcome variables, as is usually the case with semiparametric propensity score analysis.
In observational clinic registries, time to treatment is often of interest, but treatment can be given at any time during follow-up and there is no structure or intervention to ensure regular clinic visits for data collection. To address these challenges, we introduce the time-dependent propensity process as a generalization of the propensity score. We show that the propensity process balances the entire time-varying covariate history, which cannot be achieved by existing propensity score methods, and that treatment assignment is strongly ignorable conditional on the propensity process. We develop methods for estimating the propensity process using observed data and for matching based on the propensity process. We illustrate the propensity process method using the Emory Amyotrophic Lateral Sclerosis (ALS) Registry data.
A central goal in designing clinical trials is to find the test that maximizes power (or equivalently minimizes required sample size) for finding a true research hypothesis subject to the constraint of type I error. When there is more than one test, such as in clinical trials with multiple endpoints, the issues of optimal design and optimal policies become more complex. In this paper we address the question of how such optimal tests should be defined and how they can be found. We review different notions of power and how they relate to study goals, and also consider the requirements of type I error control and the nature of the policies. This leads us to formulate the optimal policy problem as an explicit optimization problem with objective and constraints that describe its specific desiderata. We describe a complete solution for deriving optimal policies for two hypotheses, which have desired monotonicity properties and are computationally simple. For some of the optimization formulations this yields optimal policies that are identical to existing policies, such as Hommel's procedure or the procedure of Bittman et al. (2009), while for others it yields completely novel and more powerful policies than existing ones. We demonstrate the nature of our novel policies and their improved power extensively in simulation and on the APEX study (Cohen et al., 2016).