
Measurement bias: a structural perspective

Published by: Yingjie Zheng
Publication date: 2020
Research field: Mathematical statistics
Paper language: English





The causal structure of measurement bias (MB) remains controversial. Aided by directed acyclic graphs (DAGs), this paper proposes a new structure for the measurement of a single variable, in which MB arises from the selection of an imperfect, I/O-device-like measurement system. For effect estimation, however, an extra source of MB arises from any redundant association between the measured exposure and the measured outcome. The misclassification is bidirectionally differential when the measured exposure and the measured outcome share a common outcome, unidirectionally differential when one causally affects the other, and non-differential when they share a common cause or when the effect is null. The measured exposure can in fact affect the measured outcome, or vice versa, so reverse causality is a concept defined at the level of measurement. Our new DAGs clarify the structures and mechanisms of MB.
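To make the differential/non-differential distinction concrete, here is a minimal simulation sketch (not the paper's DAG machinery; the variable names, error rates, and flip mechanism are all illustrative assumptions). When the measurement error of the exposure depends on the outcome, the misclassification rate differs across outcome strata; when it does not, the rates coincide.

```python
# Illustrative only: names, rates, and mechanism are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

A = rng.binomial(1, 0.4, n)                       # true exposure
Y = rng.binomial(1, np.where(A == 1, 0.5, 0.2))   # true outcome, caused by A

# Non-differential: the error in the measured exposure A* ignores Y.
A_nd = np.where(rng.random(n) < 0.10, 1 - A, A)

# Differential: the error rate depends on the outcome, as when the
# measurement system is influenced by (or selected on) Y.
A_d = np.where(rng.random(n) < np.where(Y == 1, 0.25, 0.05), 1 - A, A)

for label, A_star in [("non-differential", A_nd), ("differential", A_d)]:
    err = A_star != A
    print(f"{label}: P(err|Y=1)={err[Y == 1].mean():.3f}  "
          f"P(err|Y=0)={err[Y == 0].mean():.3f}")
```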




Read also

We offer a natural and extensible measure-theoretic treatment of missingness at random. Within the standard missing data framework, we give a novel characterisation of the observed data as a stopping-set sigma algebra. We demonstrate that the usual missingness at random conditions are equivalent to requiring particular stochastic processes to be adapted to a set-indexed filtration of the complete data: measurability conditions that suffice to ensure the likelihood factorisation necessary for ignorability. Our rigorous statement of the missing at random conditions also clarifies a common confusion: what is fixed, and what is random?
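For reference, the likelihood factorisation that such measurability conditions secure can be sketched in the classical Rubin-style notation (complete data $y = (y_{\mathrm{obs}}, y_{\mathrm{mis}})$, missingness indicator $r$, distinct parameters $\theta$ for the data and $\psi$ for the mechanism; this is the textbook argument, not the paper's measure-theoretic construction):

```latex
% Observed-data likelihood:
p(y_{\mathrm{obs}}, r \mid \theta, \psi)
  = \int p(y_{\mathrm{obs}}, y_{\mathrm{mis}} \mid \theta)\,
         p(r \mid y_{\mathrm{obs}}, y_{\mathrm{mis}}, \psi)\, \mathrm{d}y_{\mathrm{mis}}
% Under MAR, p(r | y_obs, y_mis, psi) = p(r | y_obs, psi) for all y_mis, hence
  = p(r \mid y_{\mathrm{obs}}, \psi)\, p(y_{\mathrm{obs}} \mid \theta)
% With (theta, psi) distinct, likelihood inference about theta may
% ignore the missingness mechanism (ignorability).
```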
Yufei Yi, Matey Neykov (2021)
In this paper, we propose an abstract procedure for debiasing constrained or regularized potentially high-dimensional linear models. It is elementary to show that the proposed procedure can produce $\frac{1}{\sqrt{n}}$-confidence intervals for individual coordinates (or even bounded contrasts) in models with unknown covariance, provided that the covariance has bounded spectrum. While the proof of the statistical guarantees of our procedure is simple, its implementation requires more care due to the complexity of the optimization programs we need to solve. We spend the bulk of this paper giving examples in which the proposed algorithm can be implemented in practice. One fairly general class of instances which are amenable to applications of our procedure include convex constrained least squares. We are able to translate the procedure to an abstract algorithm over this class of models, and we give concrete examples where efficient polynomial time methods for debiasing exist. Those include the constrained version of LASSO, regression under monotone constraints, regression with positive monotone constraints and non-negative least squares. In addition, we show that our abstract procedure can be applied to efficiently debias SLOPE and square-root SLOPE, among other popular regularized procedures under certain assumptions. We provide thorough simulation results in support of our theoretical findings.
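As a rough illustration of what such a debiasing step can look like, here is a sketch of the classical one-step debiased-LASSO correction (not necessarily the authors' abstract procedure; it assumes $p < n$, a well-conditioned sample covariance that can be inverted directly, and a crude residual-based noise estimate):

```python
# Sketch under assumptions: p < n, well-conditioned design, Gaussian noise.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 500, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + rng.standard_normal(n)

beta_hat = Lasso(alpha=0.1).fit(X, y).coef_

Sigma_hat = X.T @ X / n
Theta_hat = np.linalg.inv(Sigma_hat)              # precision-matrix estimate
# One-step correction: undo the shrinkage bias using the score at beta_hat.
beta_deb = beta_hat + Theta_hat @ X.T @ (y - X @ beta_hat) / n

# Coordinate-wise ~95% intervals from the asymptotic normal approximation.
sigma2_hat = np.mean((y - X @ beta_hat) ** 2)     # crude noise estimate
se = np.sqrt(sigma2_hat * np.diag(Theta_hat @ Sigma_hat @ Theta_hat.T) / n)
print(np.column_stack([beta_deb - 1.96 * se, beta_deb + 1.96 * se])[:3])
```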
Convolutional Neural Networks (CNNs) are known to rely on local texture rather than global shape when making decisions. Recent work also indicates a close relationship between a CNN's texture bias and its robustness against distribution shift, adversarial perturbation, random corruption, etc. In this work, we attempt to improve various kinds of robustness universally by alleviating CNNs' texture bias. With inspiration from the human visual system, we propose a lightweight, model-agnostic method, Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias. Specifically, we discriminate texture from shape based on local self-information in an image and adopt a Dropout-like algorithm to decorrelate the model output from the local texture. Through extensive experiments, we observe enhanced robustness under various scenarios (domain generalization, few-shot classification, image corruption, and adversarial perturbation). To the best of our knowledge, this work is one of the earliest attempts to improve different kinds of robustness in a unified model, shedding new light on the relationship between shape bias and robustness, as well as on new approaches to trustworthy machine learning algorithms. Code is available at https://github.com/bfshi/InfoDrop.
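The following is a loose, illustrative sketch of an InfoDrop-style mask (the actual implementation lives at https://github.com/bfshi/InfoDrop; the kernel-density approximation, bandwidth, and clamping below are assumptions made for exposition):

```python
# Illustrative sketch only; parameters and KDE form are assumptions.
import torch

def infodrop_mask(x, radius=2, bandwidth=0.5, min_keep=0.05):
    """Keep-mask for a (B, C, H, W) tensor: each location's self-information
    is approximated with a kernel density estimate over shifted copies of
    the input, so locations whose neighbourhoods look alike (texture) get
    low self-information and are dropped more often."""
    diffs = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = torch.roll(x, shifts=(dy, dx), dims=(2, 3))
            diffs.append((x - shifted) ** 2)
    # Kernel density: similar neighbours => high density => low self-info.
    density = torch.stack(diffs).mul(-0.5 / bandwidth**2).exp().mean(dim=0)
    self_info = -torch.log(density + 1e-8)
    # Normalise to [min_keep, 1] keep-probabilities and sample a mask.
    keep_prob = self_info / (self_info.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return torch.bernoulli(keep_prob.clamp(min_keep, 1.0))

# Usage: features = features * infodrop_mask(features)
```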
This paper develops a bias correction scheme for a multivariate normal model under a general parameterization. In the model, the mean vector and the covariance matrix share the same parameters. It includes many important regression models available in the literature as special cases, such as (non)linear regression, errors-in-variables models, and so forth. Moreover, heteroscedastic situations may also be studied within our framework. We derive a general expression for the second-order biases of maximum likelihood estimates of the model parameters and show that it is always possible to obtain the second-order bias by means of ordinary weighted least-squares regressions. We illustrate this general expression with an errors-in-variables model and also conduct some simulations in order to verify the performance of the corrected estimates. The simulation results show that the bias correction scheme yields nearly unbiased estimators. We also present an empirical illustration.
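For context, schemes of this kind typically specialise the classical Cox and Snell (1968) expression for the $O(n^{-1})$ bias of the maximum likelihood estimator (standard index notation; the paper's contribution is the general-parameterization form and its computation via weighted least-squares regressions):

```latex
% Cox & Snell (1968): O(1/n) bias of the MLE, in index notation, where
% kappa^{rs} are entries of the inverse Fisher information and
% kappa_{rst}, kappa_{rs,t} are cumulants of log-likelihood derivatives.
B(\hat{\theta}_a)
  = \sum_{r,s,t} \kappa^{ar}\,\kappa^{st}
    \left( \tfrac{1}{2}\,\kappa_{rst} + \kappa_{rs,t} \right) + O(n^{-2})
% Bias-corrected estimator:
\tilde{\theta} = \hat{\theta} - \hat{B}(\hat{\theta})
```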
Knowledge distillation is an effective approach to leverage a well-trained network, or an ensemble of them, called the teacher, to guide the training of a student network. The outputs of the teacher network are used as soft labels to supervise the training of the new network. Recent studies (Müller et al., 2019; Yuan et al., 2020) revealed an intriguing property of soft labels: making labels soft serves as a good regularizer for the student network. From the perspective of statistical learning, regularization aims to reduce variance; however, it is not clear how bias and variance change when training with soft labels. In this paper, we investigate the bias-variance tradeoff brought by distillation with soft labels. Specifically, we observe that during training the bias-variance tradeoff varies from sample to sample. Further, under the same distillation temperature, we observe that distillation performance is negatively associated with the number of certain specific samples, which we name regularization samples, since these samples increase bias and decrease variance. Nevertheless, we empirically find that completely filtering out regularization samples also deteriorates distillation performance. These observations inspired us to propose novel weighted soft labels that help the network adaptively handle the sample-wise bias-variance tradeoff. Experiments on standard evaluation benchmarks validate the effectiveness of our method. Our code is available at https://github.com/bellymonster/Weighted-Soft-Label-Distillation.
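A minimal sketch of temperature-scaled distillation with per-sample weights is given below (the standard KD loss; the uniform default weights are a placeholder, since the paper derives its weights from the sample-wise bias-variance behaviour):

```python
# Sketch: standard temperature-scaled KD with per-sample weights.
# The weight rule is a placeholder (uniform by default); the paper's
# weights come from its sample-wise bias-variance analysis.
import torch
import torch.nn.functional as F

def weighted_kd_loss(student_logits, teacher_logits, T=4.0, weights=None):
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # Per-sample KL(teacher || student), scaled by T^2 as is conventional.
    kl = F.kl_div(log_p_student, p_teacher, reduction="none").sum(dim=1) * T * T
    if weights is None:
        weights = torch.ones_like(kl)            # uniform placeholder
    return (weights * kl).mean()

# Usage with random logits:
s, t = torch.randn(8, 10), torch.randn(8, 10)
print(weighted_kd_loss(s, t).item())
```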