NLP models are vulnerable to data poisoning attacks. One type of attack can plant a backdoor in a model by injecting poisoned examples in training, causing the victim model to misclassify test instances which include a specific pattern. Although defences exist to counter these attacks, they are specific to an attack type or pattern. In this paper, we propose a generic defence mechanism by making the training process robust to poisoning attacks through gradient shaping methods, based on differentially private training. We show that our method is highly effective in mitigating, or even eliminating, poisoning attacks on text classification, with only a small cost in predictive accuracy.
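The abstract describes the defence only as gradient shaping "based on differentially private training", without giving its exact procedure. A minimal sketch of one such gradient-shaping step in the DP-SGD style, clipping each example's gradient and adding Gaussian noise before the update, could look like the following; the function name dp_sgd_step and the clip_norm and noise_multiplier parameters are illustrative assumptions, not the paper's stated implementation.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One gradient-shaping step: clip each example's gradient to
    clip_norm, sum the clipped gradients, add Gaussian noise, and
    apply the averaged, noised gradient as the parameter update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Compute and clip per-example gradients, so no single (possibly
    # poisoned) example can dominate the update direction.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale

    # Add noise calibrated to the clipping norm and average over the batch.
    batch_size = len(batch_x)
    for p, s in zip(params, summed):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=s.shape)
        p.grad = (s + noise) / batch_size

    optimizer.step()
    optimizer.zero_grad()
```

The clipping bounds the influence of any individual training example, and the added noise further obscures rare, targeted patterns such as backdoor triggers, which is the intuition behind using differentially private training as a poisoning defence.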