Recent research has adopted a new experimental field centered around the concept of text perturbations, which has revealed that shuffled word order has little to no impact on the downstream performance of Transformer-based language models across many NLP tasks. These findings contradict the common understanding of how the models encode hierarchical and structural information and even raise the question of whether word order is modeled with position embeddings. To this end, this paper proposes nine probing datasets organized by the type of controllable text perturbation for three Indo-European languages with a varying degree of word order flexibility: English, Swedish, and Russian. Based on the probing analysis of the M-BERT and M-BART models, we report that the syntactic sensitivity depends on the language and the model pre-training objectives. We also find that the sensitivity grows across layers together with the increase of the perturbation granularity. Last but not least, we show that the models barely use the positional information to induce syntactic trees from their intermediate self-attention and contextualized representations.
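The abstract does not spell out the exact perturbation scheme, so below is a minimal sketch, assuming a simple chunk-shuffling perturbation whose granularity controls how much local word order is preserved, paired with the standard way of reading out intermediate self-attention maps and contextualized representations from M-BERT via the Hugging Face `transformers` API. The function name `perturb_word_order` and the shuffling scheme are illustrative assumptions, not the paper's actual dataset construction.

```python
# Minimal sketch (not the paper's exact pipeline): a controllable word-order
# perturbation plus extraction of the self-attention maps and hidden states
# that a probe could be trained on. The chunk-shuffling scheme is an
# illustrative assumption.
import random

import torch
from transformers import AutoModel, AutoTokenizer


def perturb_word_order(sentence: str, granularity: int, seed: int = 0) -> str:
    """Shuffle a sentence at a chosen granularity: split the word sequence
    into chunks of `granularity` words and shuffle the chunks, so larger
    values preserve more of the local word order."""
    words = sentence.split()
    chunks = [words[i:i + granularity] for i in range(0, len(words), granularity)]
    random.Random(seed).shuffle(chunks)
    return " ".join(w for chunk in chunks for w in chunk)


tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained(
    "bert-base-multilingual-cased",
    output_attentions=True,
    output_hidden_states=True,
)
model.eval()

original = "the quick brown fox jumps over the lazy dog"
shuffled = perturb_word_order(original, granularity=1)

for text in (original, shuffled):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions: one (batch, heads, seq, seq) tensor per layer;
    # outputs.hidden_states: contextualized representations per layer.
    print(text)
    print("  layers:", len(outputs.attentions),
          "attention shape:", tuple(outputs.attentions[-1].shape))
```

Comparing the per-layer attention maps and hidden states of the original and shuffled inputs is one way to measure how sensitive each layer is to a given perturbation granularity.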