Adversarial training (AT) as a regularization method has proved its effectiveness on various tasks. Though there are successful applications of AT on some NLP tasks, the distinguishing characteristics of NLP tasks have not been exploited. In this paper, we aim to apply AT on machine reading comprehension (MRC) tasks. Furthermore, we adapt AT for MRC tasks by proposing a novel adversarial training method called PQAT that perturbs the embedding matrix instead of word vectors. To differentiate the roles of passages and questions, PQAT uses additional virtual P/Q-embedding matrices to gather the global perturbations of words from passages and questions separately. We test the method on a wide range of MRC tasks, including span-based extractive RC and multiple-choice RC. The results show that adversarial training is effective universally, and PQAT further improves the performance.
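The abstract describes PQAT only at a high level, so the following is a minimal PyTorch sketch of what an embedding-matrix adversarial training step in its spirit could look like: perturbations are accumulated in two virtual embedding matrices, one gathering gradients from passage tokens and one from question tokens, instead of being applied per word vector. The HuggingFace-style `inputs_embeds` forward, the `question_mask` batch field, the single-step FGM-style perturbation, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a PQAT-style training step (not the authors' code).
import torch
import torch.nn.functional as F

def pqat_step(model, batch, optimizer, epsilon=1.0):
    """One step: clean loss plus an adversarial loss obtained by perturbing
    the shared embedding matrix through virtual P/Q perturbation matrices."""
    emb = model.get_input_embeddings().weight            # (vocab, dim)
    # Virtual P/Q perturbation matrices, one per role (passage / question).
    delta_p = torch.zeros_like(emb, requires_grad=True)
    delta_q = torch.zeros_like(emb, requires_grad=True)

    input_ids = batch["input_ids"]
    # Assumed helper field: 1 where a token belongs to the question,
    # 0 where it belongs to the passage.
    q_mask = batch["question_mask"].unsqueeze(-1).float()

    def forward(dp, dq):
        # Gather per-token perturbations from the role-specific matrices, so
        # the same word gets different perturbations in passage vs. question.
        pert = dp[input_ids] * (1 - q_mask) + dq[input_ids] * q_mask
        inputs_embeds = emb[input_ids] + pert
        logits = model(inputs_embeds=inputs_embeds,
                       attention_mask=batch["attention_mask"]).logits
        # Classification-style loss for illustration; span-based RC would
        # use start/end losses instead.
        return F.cross_entropy(logits, batch["labels"])

    # Clean pass; backward also populates gradients on delta_p / delta_q.
    loss = forward(delta_p, delta_q)
    loss.backward()

    # FGM-style step along the gradient of each virtual matrix, using a
    # simplified global (Frobenius) normalization over the whole matrix.
    with torch.no_grad():
        adv_p = epsilon * delta_p.grad / (delta_p.grad.norm() + 1e-12)
        adv_q = epsilon * delta_q.grad / (delta_q.grad.norm() + 1e-12)

    # Adversarial pass; gradients accumulate onto the model parameters.
    adv_loss = forward(adv_p, adv_q)
    adv_loss.backward()

    optimizer.step()
    optimizer.zero_grad()
    return loss.item(), adv_loss.item()
```

Because the perturbation matrices are indexed by vocabulary id, every occurrence of a word within a role contributes to one shared row, which is one plausible reading of "gathering the global perturbations of words" from passages and questions separately.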