Making Contrastive Learning Robust to Shortcuts


Abstract

Contrastive learning is one of the fastest-growing research areas in machine learning due to its ability to learn useful representations without labeled data. However, contrastive learning is susceptible to shortcuts: it may learn shortcut features that are irrelevant to the task of interest and discard relevant information. Past work has addressed this limitation via handcrafted data augmentations that eliminate the shortcut. However, manually crafted augmentations do not work across all datasets and tasks. Further, data augmentations fail to address shortcuts in multi-attribute classification, where one attribute can act as a shortcut around the others. In this paper, we analyze the objective function of contrastive learning and formally prove that it is vulnerable to shortcuts. We then present reconstructive contrastive learning (RCL), a framework for learning unsupervised representations that are robust to shortcuts. The key idea is to force the learned representation to reconstruct the input, which naturally counters potential shortcuts. Extensive experiments verify that RCL is highly robust to shortcuts and outperforms state-of-the-art contrastive learning methods on a variety of datasets and tasks.
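The exact RCL objective is given in the paper; as a rough illustration of the general idea (a standard contrastive term plus an input-reconstruction term that discourages shortcut-only representations), here is a minimal sketch. It assumes a PyTorch encoder/decoder pair and an InfoNCE contrastive loss; the names `encoder`, `decoder`, and the weight `lambda_rec` are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): InfoNCE contrastive loss
# combined with an input-reconstruction term, illustrating how reconstruction
# forces the representation to retain information a shortcut would discard.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE loss between embeddings of two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def rcl_style_loss(encoder, decoder, x1, x2, lambda_rec=1.0):
    """Contrastive loss on two views plus reconstruction of each view's input."""
    z1, z2 = encoder(x1), encoder(x2)
    contrastive = info_nce(z1, z2)
    # Reconstruction term: the representation must carry enough information
    # to recover the input, countering shortcut features that throw it away.
    recon = F.mse_loss(decoder(z1), x1) + F.mse_loss(decoder(z2), x2)
    return contrastive + lambda_rec * recon
```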
