
Worth Weighting? How to Think About and Use Weights in Survey Experiments

Published by Luis Fernando Campos
Publication date: 2017
Research field: Mathematical Statistics
Paper language: English





The popularity of online surveys has increased the prominence of using weights that capture units' probabilities of inclusion for claims of representativeness. Yet, much uncertainty remains regarding how these weights should be employed in the analysis of survey experiments: Should they be used or ignored? If they are used, which estimators are preferred? We offer practical advice, rooted in the Neyman-Rubin model, for researchers producing and working with survey experimental data. We examine simple, efficient estimators for analyzing these data, and give formulae for their biases and variances. We provide simulations that examine these estimators as well as real examples from experiments administered online through YouGov. We find that for examining the existence of population treatment effects using high-quality, broadly representative samples recruited by top online survey firms, sample quantities, which do not rely on weights, are often sufficient. Sample Average Treatment Effect (SATE) estimates did not appear to differ substantially from their weighted counterparts, and they avoided the substantial loss of statistical power that accompanies weighting. When precise estimates of Population Average Treatment Effects (PATE) are essential, we show analytically that post-stratifying on survey weights and/or covariates highly correlated with the outcome is a conservative choice. While we show these substantial gains in simulations, we find limited evidence of them in practice.
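To make the distinction between sample and population quantities concrete, the sketch below (my own, not the authors' replication code) contrasts an unweighted difference in means, which estimates the SATE, with a Hájek-style survey-weighted difference in means, which targets the PATE. The simulated data and the roles of y, treat, and w are hypothetical placeholders for a survey-experiment dataset.

```python
# Sketch: unweighted SATE estimate vs. survey-weighted (Hajek) PATE estimate.
import numpy as np

def sate_hat(y, treat):
    """Unweighted difference in means: estimates the sample ATE."""
    return y[treat == 1].mean() - y[treat == 0].mean()

def pate_hat_hajek(y, treat, w):
    """Survey-weighted (Hajek) difference in means: targets the population ATE."""
    t1, t0 = treat == 1, treat == 0
    return np.average(y[t1], weights=w[t1]) - np.average(y[t0], weights=w[t0])

rng = np.random.default_rng(0)
n = 2000
w = rng.lognormal(sigma=0.5, size=n)           # hypothetical survey weights
treat = rng.integers(0, 2, size=n)             # randomized treatment assignment
y = 1.0 + 0.5 * treat + rng.normal(size=n)     # outcome with a constant treatment effect

print(sate_hat(y, treat), pate_hat_hajek(y, treat, w))
```

Because the weighted estimator inherits the variability of the survey weights, its standard error is typically larger; that is the loss of statistical power referred to above, and it is why post-stratifying on weight strata or prognostic covariates is presented as the conservative route when the PATE is the target.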




Read also

The Consent-to-Contact (C2C) registry at the University of California, Irvine collects data from community participants to aid in recruitment to clinical research studies. Self-selection into the C2C likely leads to bias, due in part to enrollees having more years of education relative to the US general population. Salazar et al. (2020) recently used the C2C to examine associations of race/ethnicity with participant willingness to be contacted about research studies. To address questions about the generalizability of the estimated associations, we estimate weights based on the propensity for self-selection into the convenience sample, using data from the National Health and Nutrition Examination Survey (NHANES). We create a combined dataset of C2C and NHANES subjects and compare different approaches (logistic regression, covariate balancing propensity score, entropy balancing, and random forest) for estimating the probability of membership in the C2C relative to NHANES. We propose methods to estimate the variance of parameter estimates that account for the uncertainty that arises from estimating the propensity weights. Simulation studies explore the impact of propensity weight estimation on uncertainty. We demonstrate the approach by repeating the analysis by Salazar et al. with the derived propensity weights for the C2C subjects and contrast the results of the two analyses. This method can be implemented using our estweight package in R, available on GitHub.
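The following is a minimal sketch, assuming hypothetical data frames and covariate names, of the logistic-regression variant of the procedure described above (it is not the estweight package): stack the convenience sample with the reference survey, model membership, and convert the fitted probabilities into inverse-odds-of-selection weights for the convenience-sample rows. NHANES survey weights and the variance correction for estimated weights are omitted.

```python
# Sketch: selection weights for a convenience sample via a membership model.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def selection_weights(c2c: pd.DataFrame, nhanes: pd.DataFrame, covariates: list):
    """Return inverse-odds-of-selection weights for the convenience-sample rows."""
    combined = pd.concat(
        [c2c.assign(in_c2c=1), nhanes.assign(in_c2c=0)], ignore_index=True
    )
    model = LogisticRegression(max_iter=1000).fit(
        combined[covariates], combined["in_c2c"]
    )
    p = model.predict_proba(c2c[covariates])[:, 1]   # estimated P(C2C membership | x)
    return (1 - p) / p                               # up-weights under-represented profiles
```

The covariate balancing propensity score, entropy balancing, and random forest approaches mentioned above would replace the membership model while keeping the same stacking-and-weighting structure.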
The inverse probability weighting approach is popular for evaluating treatment effects in observational studies, but extreme propensity scores can bias the estimator and induce excessive variance. Recently, the overlap weighting approach has been proposed to alleviate this problem by smoothly down-weighting subjects with extreme propensity scores. Although the advantages of overlap weighting have been extensively demonstrated in the literature for continuous and binary outcomes, research on its performance with time-to-event or survival outcomes is limited. In this article, we propose two weighting estimators that combine propensity score weighting and inverse probability of censoring weighting to estimate the counterfactual survival functions. These estimators are applicable to the general class of balancing weights, which includes inverse probability weighting, trimming, and overlap weighting as special cases. We conduct simulations to examine the empirical performance of these estimators with different weighting schemes in terms of bias, variance, and 95% confidence interval coverage, under various degrees of covariate overlap between treatment groups and censoring rates. We demonstrate that overlap weighting consistently outperforms inverse probability weighting and associated trimming methods in bias, variance, and coverage for time-to-event outcomes, and that the advantages increase as the degree of covariate overlap between the treatment groups decreases.
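As a reference point for the family of balancing weights named above, here is a small sketch (my own illustration, not the paper's estimators) that constructs inverse probability weights, a trimmed variant, and overlap weights from a vector of propensity scores e and a binary treatment indicator z; the paper's survival estimators further multiply these by inverse-probability-of-censoring weights, which this sketch omits.

```python
# Sketch: three members of the balancing-weight family from propensity scores.
import numpy as np

def ipw(e, z):
    """Inverse probability weights: 1/e for treated, 1/(1-e) for controls."""
    return np.where(z == 1, 1 / e, 1 / (1 - e))

def trimmed_ipw(e, z, alpha=0.1):
    """IPW with units outside [alpha, 1-alpha] removed from the analysis."""
    w = ipw(e, z)
    w[(e < alpha) | (e > 1 - alpha)] = 0.0
    return w

def overlap(e, z):
    """Overlap weights: 1-e for treated, e for controls; smoothly down-weight extremes."""
    return np.where(z == 1, 1 - e, e)
```

Overlap weights are bounded between 0 and 1, which is why they avoid the variance inflation caused by extreme propensity scores.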
Background: There is increasing interest in approaches for analyzing the effect of exposure mixtures on health. A key issue is how to simultaneously analyze often highly collinear components of the mixture, which can create problems such as confounding by co-exposure and co-exposure amplification bias (CAB). Evaluation of novel mixtures methods, typically using synthetic data, is critical to their ultimate utility. Objectives: This paper aims to answer two questions. How do causal models inform the interpretation of statistical models and the creation of synthetic data used to test them? Are novel mixtures methods susceptible to CAB? Methods: We use directed acyclic graphs (DAGs) and linear models to derive closed-form solutions for model parameters to examine how underlying causal assumptions affect the interpretation of model results. Results: The same beta coefficients estimated by a statistical model can have different interpretations depending on the assumed causal structure. Similarly, the method used to simulate data can have implications for the underlying DAG (and vice versa), and therefore for the identification of the parameter being estimated with an analytic approach. We demonstrate that methods that can reproduce the results of linear regression, such as Bayesian kernel machine regression and the new quantile g-computation approach, will be subject to CAB. However, under some conditions, estimates of an overall effect of the mixture are not subject to CAB and even have reduced uncontrolled bias. Discussion: Just as DAGs encode a priori subject-matter knowledge allowing identification of the variable control needed to block analytic bias, we recommend explicitly identifying the DAGs underlying synthetic data created to test statistical mixtures approaches. Estimation of the total effect of a mixture is an important but relatively underexplored topic that warrants further investigation.
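To see one standard data-generating structure in which CAB can arise, consider the toy simulation below (my own illustration, not taken from the paper): a co-exposure x2 is a source of the exposure x1 but has no effect on the outcome, while an unmeasured confounder u affects both x1 and y. Adjusting for x2 removes variation in x1 that is unrelated to u, so the confounding bias in the x1 coefficient grows.

```python
# Sketch: bias amplification from adjusting for a correlated co-exposure.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
u = rng.normal(size=n)                          # unmeasured confounder
x2 = rng.normal(size=n)                         # co-exposure, no direct effect on y
x1 = x2 + u + rng.normal(size=n)                # exposure shares sources with x2 and u
y = u + rng.normal(size=n)                      # true effect of x1 on y is zero

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_unadj = ols(np.column_stack([np.ones(n), x1]), y)[1]
b_adj = ols(np.column_stack([np.ones(n), x1, x2]), y)[1]
print(f"x1 coefficient, unadjusted: {b_unadj:.2f}; adjusted for x2: {b_adj:.2f}")
```

Under these settings the unadjusted coefficient converges to about 0.33 and the co-exposure-adjusted one to about 0.50, even though the true effect of x1 is zero.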
Treatment switching in a randomized controlled trial is said to occur when a patient randomized to one treatment arm switches to another treatment arm during follow-up. This can occur at the point of disease progression, whereby patients in the control arm may be offered the experimental treatment. It is widely known that failure to account for treatment switching can seriously dilute the estimated effect of treatment on overall survival. In this paper, we aim to account for the potential impact of treatment switching in a re-analysis evaluating the treatment effect of Nucleoside Reverse Transcriptase Inhibitors (NRTIs) on a safety outcome (time to first severe or worse sign or symptom) in participants receiving a new antiretroviral regimen that either included or omitted NRTIs in the Optimized Treatment That Includes or Omits NRTIs (OPTIONS) trial. We propose an estimator of a treatment causal effect under a structural cumulative survival model (SCSM) that leverages randomization as an instrumental variable to account for selective treatment switching. Unlike Robins' accelerated failure time model, often used to address treatment switching, the proposed approach avoids the need for artificial censoring for estimation. We establish that the proposed estimator is uniformly consistent and asymptotically Gaussian under standard regularity conditions. A consistent variance estimator is also given, and a simple resampling approach provides uniform confidence bands for the causal difference comparing treatment groups over time on the cumulative intensity scale. We develop an R package named ivsacim implementing all proposed methods, freely available to download from CRAN. We examine the finite-sample performance of the estimator via extensive simulations.
Tomokazu Konishi (2012)
Motivation: Although principal component analysis is frequently applied to reduce the dimensionality of matrix data, the method is sensitive to noise and bias and has difficulty with comparability and interpretation. These issues are addressed by improving the fidelity to the study design. Principal axes and the components for variables are found through the arrangement of the training data set, and the centers of data are found according to the design. By using both the axes and the center, components for observations that belong to various studies can be estimated separately. Both the components for variables and those for observations are scaled to unit length, which enables relationships to be seen between them. Results: Analyses in transcriptome studies showed an improvement in the separation of experimental groups and in robustness to bias and noise. Unknown samples were appropriately classified on predetermined axes. These axes reflected the study design well, and this facilitated the interpretation. Together, the introduced concepts resulted in improved generality and objectivity in the analytical results, with the ability to locate hidden structures in the data.
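A minimal sketch of the idea as described above, under my own simplifying assumptions (the design-based centering is reduced to the training mean, and the choice of two components is arbitrary); it is not Konishi's implementation. Principal axes are fixed by a training set, observations from other studies are projected onto those fixed axes, and each component is scaled to unit length for comparability.

```python
# Sketch: fixed principal axes from a training set, reused across studies.
import numpy as np

def fit_axes(train):
    """Principal axes (rows of vt) and the data center from the training set."""
    center = train.mean(axis=0)                 # simplification of design-based centering
    _, _, vt = np.linalg.svd(train - center, full_matrices=False)
    return center, vt

def components(X, center, axes, k=2):
    """Project observations (possibly from another study) onto the fixed axes."""
    scores = (X - center) @ axes[:k].T
    return scores / np.linalg.norm(scores, axis=0, keepdims=True)  # unit-length components

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 10))
center, axes = fit_axes(train)
new_obs = rng.normal(size=(20, 10))             # samples from a separate study
print(components(new_obs, center, axes).shape)  # (20, 2)
```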