Multilingual pre-trained contextual embedding models (Devlin et al., 2019) have achieved impressive performance on zero-shot cross-lingual transfer tasks. However, finding the most effective strategy for fine-tuning these models on high-resource languages so that they transfer well to zero-shot target languages is non-trivial. In this paper, we propose a novel meta-optimizer that soft-selects which layers of the pre-trained model to freeze during fine-tuning. We train the meta-optimizer by simulating the zero-shot transfer scenario. Results on cross-lingual natural language inference show that our approach improves over the simple fine-tuning baseline and X-MAML (Nooralahzadeh et al., 2020).
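To make the soft-selection idea concrete, below is a minimal PyTorch sketch of per-layer gating: each encoder layer gets one learnable gate in (0, 1), and the inner-loop update scales that layer's gradient step by its gate, so a gate near 0 effectively freezes the layer. The class and function names (`SoftLayerFreezer`, `gated_sgd_step`) and the sigmoid-gate parameterization are illustrative assumptions, not the paper's exact formulation; in the actual method the gates would be meta-learned by differentiating through the simulated zero-shot transfer loss, which this sketch omits.

```python
import torch
import torch.nn as nn

class SoftLayerFreezer(nn.Module):
    """Hypothetical sketch: one learnable gate per encoder layer.
    A gate near 0 softly freezes that layer; near 1 lets it fine-tune."""
    def __init__(self, num_layers: int):
        super().__init__()
        # Unconstrained logits, squashed to (0, 1) by a sigmoid.
        self.gate_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self) -> torch.Tensor:
        return torch.sigmoid(self.gate_logits)

@torch.no_grad()
def gated_sgd_step(layers, gates, lr: float = 2e-5):
    """Inner-loop update: scale each layer's gradient step by its gate.
    NOTE: the real meta-optimizer would backpropagate a simulated
    zero-shot loss through this update (MAML-style) to train the gates;
    here the step is taken under no_grad purely for illustration."""
    for layer, gate in zip(layers, gates):
        for p in layer.parameters():
            if p.grad is not None:
                p -= lr * gate * p.grad

# Usage with a 12-layer encoder such as mBERT (assumed layer count):
# freezer = SoftLayerFreezer(num_layers=12)
# gates = freezer()                      # one gate per layer
# loss.backward()                        # populate .grad on the encoder
# gated_sgd_step(encoder.layers, gates)  # gated fine-tuning step
```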