NoiLIn: Do Noisy Labels Always Hurt Adversarial Training?


Abstract

Adversarial training (AT), based on minimax optimization, is a popular learning paradigm that enhances a model's adversarial robustness. Noisy labels (NL) commonly undermine learning and hurt a model's performance. Interestingly, these two research directions have rarely intersected. In this paper, we raise an intriguing question: does NL always hurt AT? First, we find that injecting NL into the inner maximization that generates adversarial data implicitly augments the natural data, which benefits AT's generalization. Second, we find that injecting NL into the outer minimization that performs the learning serves as a regularizer that alleviates robust overfitting, which benefits AT's robustness. To enhance AT's adversarial robustness, we propose NoiLIn, which gradually increases Noisy Labels Injection over the AT training process. Empirically, NoiLIn answers the question above negatively: adversarial robustness can indeed be enhanced by NL injection. Philosophically, we offer a new perspective on learning with NL: NL should not always be deemed detrimental, and even when the training set contains no NL, we may consider injecting it deliberately.
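To make the procedure concrete, below is a minimal PyTorch-style sketch of one AT epoch with NL injection in both the inner maximization and the outer minimization. The symmetric label flipping, the PGD hyperparameters, and the linearly increasing noise schedule are illustrative assumptions rather than the paper's exact configuration; `flip_labels`, `pgd_attack`, and `noilin_epoch` are hypothetical helper names.

```python
import torch
import torch.nn.functional as F

def flip_labels(y, noise_rate, num_classes):
    # Symmetric label noise: replace a fraction `noise_rate` of labels
    # with uniformly random classes (a flip may land on the true label).
    mask = torch.rand(y.shape, device=y.device) < noise_rate
    random_labels = torch.randint_like(y, num_classes)
    return torch.where(mask, random_labels, y)

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard PGD inner maximization, run on the (possibly noisy) labels.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv

def noilin_epoch(model, loader, optimizer, noise_rate, num_classes=10):
    # One AT epoch: the same noisy labels drive adversarial-example
    # generation (inner max) and the training loss (outer min).
    model.train()
    for x, y in loader:
        y_noisy = flip_labels(y, noise_rate, num_classes)
        x_adv = pgd_attack(model, x, y_noisy)
        loss = F.cross_entropy(model(x_adv), y_noisy)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Illustrative training loop: the injection rate grows over training.
# The paper adjusts the rate gradually; a capped linear ramp is a stand-in.
# for epoch in range(num_epochs):
#     noise_rate = min(0.4, 0.05 + 0.01 * epoch)  # assumed schedule
#     noilin_epoch(model, loader, optimizer, noise_rate)
```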
