In machine reading comprehension tasks, a model must extract an answer from the available context given a question and a passage. Recently, transformer-based pre-trained language models have achieved state-of-the-art performance in several natural language processing tasks. However, it is unclear whether such performance reflects true language understanding. In this paper, we propose adversarial examples to probe an Arabic pre-trained language model (AraBERT), leading to a significant performance drop over four Arabic machine reading comprehension datasets. We present a layer-wise analysis for the transformer's hidden states to offer insights into how AraBERT reasons to derive an answer. The experiments indicate that AraBERT relies on superficial cues and keyword matching rather than text understanding. Furthermore, hidden state visualization demonstrates that prediction errors can be recognized from vector representations in earlier layers.
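The abstract does not spell out how the adversarial examples are built. One common scheme for probing keyword-matching behavior in reading comprehension models is to append a distractor sentence that reuses the question's content words around a wrong answer, so a model relying on surface cues is drawn to the distractor while the true answer remains in the passage. A minimal sketch of that idea, not the paper's exact method (the function names, stopword list, and fake-answer choice are illustrative assumptions):

```python
# Hypothetical sketch of a keyword-reuse distractor, in the spirit of
# AddSent-style adversarial attacks on reading comprehension models.
# Not the paper's actual construction.

STOPWORDS = {"what", "who", "when", "where", "which", "how",
             "is", "was", "are", "the", "a", "an", "of", "did", "does", "in"}

def make_distractor(question: str, fake_answer: str) -> str:
    """Reuse the question's content words and attach a wrong answer,
    producing a sentence that matches the question superficially."""
    words = question.lower().rstrip("?").split()
    keywords = [w for w in words if w not in STOPWORDS]
    return " ".join(keywords) + " " + fake_answer + "."

def adversarial_passage(passage: str, question: str, fake_answer: str) -> str:
    """Append the distractor; the original (correct) answer span is
    still present, so the gold label stays valid."""
    return passage + " " + make_distractor(question, fake_answer)

passage = "Tesla was born in 1856 in the Austrian Empire."
question = "When was Tesla born?"
adv = adversarial_passage(passage, question, "1899")
print(adv)
```

A model that truly understands the passage should still answer "1856"; a keyword-matching model is likely to be pulled toward "1899" in the appended distractor, which is the failure mode the abstract attributes to AraBERT.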