The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.
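To make concrete what a "relatively minor change" to the Transformer can look like, below is a minimal sketch, assuming PyTorch; the module names, dimensions, and the choice of a GLU-style feed-forward variant are illustrative stand-ins, not the paper's exact configuration or code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaselineFFN(nn.Module):
    """Standard Transformer feed-forward block: Linear -> ReLU -> Linear."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.wi = nn.Linear(d_model, d_ff)
        self.wo = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.wo(F.relu(self.wi(x)))

class GatedFFN(nn.Module):
    """GLU-style variant: a learned gate multiplied into the hidden activations."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.wi = nn.Linear(d_model, d_ff)   # value path
        self.wg = nn.Linear(d_model, d_ff)   # gate path
        self.wo = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.wo(F.gelu(self.wg(x)) * self.wi(x))

# Both blocks fill the same slot in a Transformer layer, which is what
# makes a shared evaluation harness across variants feasible.
x = torch.randn(2, 16, 512)  # (batch, sequence, d_model)
for ffn in (BaselineFFN(512, 2048), GatedFFN(512, 2048)):
    assert ffn(x).shape == x.shape
```

Because such a variant is a drop-in replacement for the baseline block, any measured difference can hinge on surrounding implementation details (initialization, optimizer, normalization placement), which is consistent with the paper's conjecture.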