
Addressing Extreme Propensity Scores in Estimating Counterfactual Survival Functions via the Overlap Weights

 Added by Chao Cheng
 Publication date 2021
Language: English





The inverse probability weighting approach is popular for evaluating treatment effects in observational studies, but extreme propensity scores can bias the estimator and induce excessive variance. Recently, the overlap weighting approach has been proposed to alleviate this problem; it smoothly down-weights subjects with extreme propensity scores. Although the advantages of overlap weighting have been extensively demonstrated in the literature for continuous and binary outcomes, research on its performance with time-to-event or survival outcomes is limited. In this article, we propose two weighting estimators that combine propensity score weighting and inverse probability of censoring weighting to estimate the counterfactual survival functions. These estimators are applicable to the general class of balancing weights, which includes inverse probability weighting, trimming, and overlap weighting as special cases. We conduct simulations to examine the empirical performance of these estimators under different weighting schemes in terms of bias, variance, and 95% confidence interval coverage, under various degrees of covariate overlap between treatment groups and various censoring rates. We demonstrate that overlap weighting consistently outperforms inverse probability weighting and associated trimming methods in bias, variance, and coverage for time-to-event outcomes, and that its advantages increase as the degree of covariate overlap between the treatment groups decreases.
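The balancing-weight schemes compared in the abstract can be sketched in a few lines. The snippet below is a minimal illustration on simulated data (not the authors' implementation; the propensity score would be estimated in practice, and the censoring-weight step is omitted): it contrasts inverse probability weighting, trimming, and overlap weighting, showing how overlap weights stay bounded even when propensity scores are extreme.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
# Simulated true propensity with poor overlap in the tails
e = 1 / (1 + np.exp(-2.5 * x))
z = rng.binomial(1, e)

def balancing_weights(e, z, scheme="overlap", trim=0.1):
    """Balancing weights via a tilting function h(e):
    IPW: h(e)=1; trimming: h(e)=1{trim < e < 1-trim}; overlap: h(e)=e(1-e)."""
    if scheme == "ipw":
        h = np.ones_like(e)
    elif scheme == "trim":
        h = ((e > trim) & (e < 1 - trim)).astype(float)
    elif scheme == "overlap":
        h = e * (1 - e)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    # Subject i gets h(e_i)/e_i if treated, h(e_i)/(1 - e_i) if control
    return np.where(z == 1, h / e, h / (1 - e))

w_ipw = balancing_weights(e, z, "ipw")
w_ow = balancing_weights(e, z, "overlap")

# Overlap weights are bounded: treated subjects get 1 - e, controls get e,
# so extreme propensity scores cannot blow up the variance.
print(w_ipw.max(), w_ow.max())  # IPW max weight far exceeds 1; overlap weights never exceed 1
```

Trimming discards subjects outside the band entirely, while overlap weighting shrinks their contribution smoothly, which is the source of its variance advantage.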



Related Research

A straightforward application of semi-supervised machine learning to the problem of treatment effect estimation would be to consider data as unlabeled if treatment assignment and covariates are observed but outcomes are unobserved. Under this formulation, large unlabeled data sets could be used to estimate a high-dimensional propensity function, and causal inference using a much smaller labeled data set could proceed via weighted estimators using the learned propensity scores. In the limiting case of infinite unlabeled data, one may estimate the high-dimensional propensity function exactly. However, longstanding advice in the causal inference community suggests that estimated propensity scores (from labeled data alone) are actually preferable to true propensity scores, implying that the unlabeled data is useless in this context. In this paper we examine this paradox and propose a simple procedure that reconciles the strong intuition that a known propensity function should be useful for estimating treatment effects with the previous literature suggesting otherwise. Further, simulation studies suggest that direct regression may be preferable to inverse-propensity weighted estimators in many circumstances.
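The "estimated beats true" phenomenon referenced above can be reproduced in a small hypothetical simulation (this is not the paper's setup; the data-generating process and sample sizes are invented for illustration). A normalized (Hajek) IPW estimator is run twice per replicate, once with the true propensity score and once with a logistic-regression estimate of it; the estimated-score version typically shows equal or lower variance.

```python
import numpy as np

def fit_logistic(X, y, iters=20):
    """Newton-Raphson logistic regression (numpy-only, for the sketch)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        hessian = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hessian, X.T @ (y - p))
    return beta

def hajek_ate(y, z, ps):
    """Normalized (Hajek) IPW estimate of the average treatment effect."""
    mu1 = np.sum(z * y / ps) / np.sum(z / ps)
    mu0 = np.sum((1 - z) * y / (1 - ps)) / np.sum((1 - z) / (1 - ps))
    return mu1 - mu0

rng = np.random.default_rng(2)
n, reps = 500, 300
est_true, est_fit = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    e = 1 / (1 + np.exp(-x))           # true propensity score
    z = rng.binomial(1, e)
    y = x + z + rng.normal(size=n)     # true ATE = 1
    X = np.column_stack([np.ones(n), x])
    ehat = 1 / (1 + np.exp(-X @ fit_logistic(X, z)))  # estimated propensity
    est_true.append(hajek_ate(y, z, e))
    est_fit.append(hajek_ate(y, z, ehat))

print(np.var(est_true), np.var(est_fit))
```

Both versions are approximately unbiased for the true effect of 1; the variance comparison across replicates is what illustrates the paradox.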
The Consent-to-Contact (C2C) registry at the University of California, Irvine collects data from community participants to aid recruitment to clinical research studies. Self-selection into the C2C likely leads to bias, due in part to enrollees having more years of education than the US general population. Salazar et al. (2020) recently used the C2C to examine associations of race/ethnicity with participant willingness to be contacted about research studies. To address questions about the generalizability of the estimated associations, we estimate propensity weights for self-selection into the convenience sample using data from the National Health and Nutrition Examination Survey (NHANES). We create a combined dataset of C2C and NHANES subjects and compare different approaches (logistic regression, covariate balancing propensity score, entropy balancing, and random forest) for estimating the probability of membership in the C2C relative to NHANES. We propose methods to estimate the variance of parameter estimates that account for the uncertainty arising from estimating the propensity weights. Simulation studies explore the impact of propensity weight estimation on this uncertainty. We demonstrate the approach by repeating the analysis of Salazar et al. with the estimated propensity weights for the C2C subjects and contrasting the results of the two analyses. The method can be implemented using our estweight package in R, available on GitHub.
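The membership-propensity weighting step described in this abstract can be sketched as follows. This is a hypothetical numpy-only illustration with stand-in data (not the C2C or NHANES samples; the single covariate `edu` and all sample sizes are invented): fit a logistic model for the probability of belonging to the convenience sample within the combined dataset, then form inverse-odds weights for the convenience-sample subjects.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson logistic regression (numpy-only sketch)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        hessian = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hessian, X.T @ (y - p))
    return beta

rng = np.random.default_rng(1)
# Stand-in data: education is over-represented in the convenience sample
n_conv, n_ref = 400, 600
edu_conv = rng.normal(1.0, 1.0, n_conv)  # convenience sample (more educated)
edu_ref = rng.normal(0.0, 1.0, n_ref)    # reference survey sample
X = np.column_stack([np.ones(n_conv + n_ref),
                     np.concatenate([edu_conv, edu_ref])])
member = np.concatenate([np.ones(n_conv), np.zeros(n_ref)])  # 1 = convenience

beta = fit_logistic(X, member)
p = 1 / (1 + np.exp(-X @ beta))       # P(convenience sample | covariates)
# Inverse-odds weights for the convenience-sample subjects:
w = ((1 - p) / p)[member == 1]
```

Highly educated convenience-sample subjects receive small weights, pulling the weighted sample toward the reference population; the variance methods in the abstract additionally account for the fact that `w` itself is estimated.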
We study the problem of estimating a functional or a parameter when the outcome is subject to nonignorable missingness. We completely avoid modeling the regression relation, while allowing the propensity to be modeled by a semiparametric logistic relation in which the dependence on covariates is unspecified. We discover a surprising phenomenon: the estimation of the parameter in the propensity model, as well as the functional estimation, can be carried out without assessing the missingness dependence on covariates. This allows us to propose a general class of estimators for both model parameter estimation and functional estimation, including estimation of the outcome mean. The robustness properties of the estimators are nonstandard; they are established rigorously through theoretical derivations and supported by simulations and a data application.
Forest-based methods have recently gained in popularity for non-parametric treatment effect estimation. Building on this line of work, we introduce causal survival forests, which can be used to estimate heterogeneous treatment effects in a survival and observational setting where outcomes may be right-censored. Our approach relies on orthogonal estimating equations to robustly adjust for both censoring and selection effects. In our experiments, we find our approach to perform well relative to a number of baselines.
The Cox regression model and its associated hazard ratio (HR) are frequently used for summarizing the effect of treatments on time-to-event outcomes. However, the HR's interpretation strongly depends on the assumed underlying survival model. The challenge of interpreting the HR has been the focus of a number of recent works, and several alternative measures have been proposed to address these concerns. Marginal Cox regression models include an identifiable hazard ratio with a population-level, rather than individual-level, causal interpretation. In this work, we study the properties of one particular marginal Cox regression model and consider its estimation in the presence of an omitted confounder. We prove the large-sample consistency of an estimating score that allows non-binary treatments. Our Monte Carlo simulations suggest that the finite-sample behavior of the procedure is adequate. The studied estimator is more robust than its competitors for weak instruments, although it is slightly more biased for large treatment effects. The practical use of the presented techniques is illustrated with a real example using data from the Vascular Quality Initiative registry. The R code used is provided as Supplementary Material.