Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests


Abstract

Informally, a "spurious correlation" is the dependence of a model on some aspect of the input data that an analyst thinks shouldn't matter. In machine learning, these have a know-it-when-you-see-it character; e.g., changing the gender of a sentence's subject changes a sentiment predictor's output. To check for spurious correlations, we can "stress test" models by perturbing irrelevant parts of the input data and seeing if model predictions change. In this paper, we study stress testing using the tools of causal inference. We introduce counterfactual invariance as a formalization of the requirement that changing irrelevant parts of the input shouldn't change model predictions. We connect counterfactual invariance to out-of-domain model performance, and provide practical schemes for learning (approximately) counterfactually invariant predictors (without access to counterfactual examples). It turns out that both the means and implications of counterfactual invariance depend fundamentally on the true underlying causal structure of the data. Distinct causal structures require distinct regularization schemes to induce counterfactual invariance. Similarly, counterfactual invariance implies different domain shift guarantees depending on the underlying causal structure. This theory is supported by empirical results on text classification.
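The stress test described above can be made concrete with a small sketch. The snippet below is illustrative only: it assumes a hypothetical predict function and a toy word-swap perturbation for the gender example mentioned in the abstract; it is not the perturbation scheme or learning method proposed in the paper.

```python
# Minimal stress-test sketch (assumptions: a user-supplied `predict` function
# and a toy word-swap perturbation; both are illustrative, not the paper's method).

# Hypothetical gendered-term swaps used to perturb the "irrelevant" part of the input.
GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "his": "hers", "her": "him", "hers": "his",
    "actor": "actress", "actress": "actor",
}

def perturb_gender(sentence: str) -> str:
    """Swap gendered words while leaving the rest of the sentence unchanged."""
    return " ".join(GENDER_SWAPS.get(tok, tok) for tok in sentence.lower().split())

def stress_test(predict, sentences):
    """Return the sentences whose prediction changes under the perturbation."""
    return [s for s in sentences if predict(s) != predict(perturb_gender(s))]

# Example usage with a hypothetical sentiment predictor:
# failures = stress_test(predict_sentiment, ["he loved the movie", "she hated the ending"])
# A counterfactually invariant predictor would yield an empty list of failures.
```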
