Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness. Existing techniques for generating such examples are typically driven by local heuristic rules that are agnostic to the context, often resulting in unnatural and ungrammatical outputs. This paper presents CLARE, a ContextuaLized AdversaRial Example generation model that produces fluent and grammatical outputs through a mask-then-infill procedure. CLARE builds on a pre-trained masked language model and modifies the inputs in a context-aware manner. We propose three contextualized perturbations, Replace, Insert, and Merge, that allow for generating outputs of varied lengths. CLARE can flexibly combine these perturbations and apply them at any position in the inputs, and is thus able to attack the victim model more effectively with fewer edits. Extensive experiments and human evaluation demonstrate that CLARE outperforms the baselines in terms of attack success rate, textual similarity, fluency and grammaticality.
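To make the mask-then-infill procedure concrete, here is a minimal sketch of the three perturbations, assuming the HuggingFace transformers library and a RoBERTa-style masked LM (distilroberta-base). Word-level splitting, the `replace`/`insert`/`merge`/`infill` helpers, and the example sentence are illustrative simplifications; the actual CLARE model operates on subwords and adds similarity constraints and attack scoring against a victim model, all omitted here.

```python
# Sketch of CLARE-style contextualized perturbations via mask-then-infill.
# Assumes: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="distilroberta-base")
MASK = fill.tokenizer.mask_token  # "<mask>" for RoBERTa-style models


def infill(masked_words, top_k=5):
    """Return the masked LM's top-k candidate sentences for the masked slot."""
    preds = fill(" ".join(masked_words), top_k=top_k)
    return [p["sequence"] for p in preds]


def replace(words, i):
    """Replace: mask the word at position i and let the LM infill it."""
    return infill(words[:i] + [MASK] + words[i + 1:])


def insert(words, i):
    """Insert: add a mask after position i, lengthening the sentence."""
    return infill(words[:i + 1] + [MASK] + words[i + 1:])


def merge(words, i):
    """Merge: collapse the bigram at positions i, i+1 into a single mask."""
    return infill(words[:i] + [MASK] + words[i + 2:])


if __name__ == "__main__":
    words = "the movie was absolutely wonderful".split()
    for candidate in replace(words, 4):
        print(candidate)
```

Because all three operations reduce to placing one mask and querying the same LM, an attacker can score every (perturbation, position) pair against the victim model and greedily apply the most damaging one, which is what enables attacks with fewer, more natural edits.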