Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder


Abstract

Recent work has proposed several efficient approaches for generating gradient-based adversarial perturbations on embeddings and has shown that training with these perturbed embeddings can improve a model's performance and robustness. However, little attention has been paid to helping the model learn from these adversarial samples more efficiently. In this work, we focus on enhancing the model's ability to defend against gradient-based adversarial attacks during training and propose two novel adversarial training approaches: (1) CARL narrows the distance between an original sample and its adversarial sample in the representation space while enlarging their distance from differently labeled samples. (2) RAR forces the model to reconstruct the original sample from its adversarial representation. Experiments show that the two proposed approaches outperform strong baselines on various text classification datasets. Analysis experiments find that with our approaches, the semantic representation of the input sentence is not significantly affected by adversarial perturbations, and the model's performance drops less under adversarial attack. That is, our approaches effectively improve the robustness of the model. Besides, RAR can also be used to generate text-form adversarial samples.
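For concreteness, below is a minimal PyTorch sketch of how objectives of this kind might look. It is not the paper's exact formulation: the function names (fgm_perturb, carl_loss, rar_loss), the FGM-style embedding perturbation, the InfoNCE-style contrastive form, and the use of MSE against the clean embeddings in place of token-level reconstruction are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgm_perturb(embeds, loss, epsilon=1.0):
    """Gradient-based perturbation on input embeddings (FGM-style sketch)."""
    grad, = torch.autograd.grad(loss, embeds, retain_graph=True)
    return embeds + epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

def carl_loss(h, h_adv, labels, temperature=0.1):
    """Contrastive term (CARL-like): pull each representation toward its
    adversarial counterpart and push it away from differently labeled
    samples in the batch."""
    h, h_adv = F.normalize(h, dim=-1), F.normalize(h_adv, dim=-1)
    sim = h @ h_adv.t() / temperature                  # (B, B) similarities
    pos = sim.diag().unsqueeze(1)                      # clean vs. own adversarial view
    diff = labels.unsqueeze(0) != labels.unsqueeze(1)  # different-label mask
    neg = sim.masked_fill(~diff, float('-inf'))        # keep only cross-label negatives
    logits = torch.cat([pos, neg], dim=1)              # positive at index 0
    target = torch.zeros(h.size(0), dtype=torch.long, device=h.device)
    return F.cross_entropy(logits, target)

def rar_loss(decoder, h_adv, clean_embeds):
    """Reconstruction term (RAR-like): decode the adversarial representation
    back toward the clean input; MSE on embeddings stands in for the
    paper's reconstruction of the original sample."""
    return F.mse_loss(decoder(h_adv), clean_embeds)
```

In a training loop, these terms would typically be added to the standard classification loss on both the clean and the perturbed embeddings, with scalar weights controlling the contrastive and reconstruction contributions.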
