Recent multilingual pre-trained models, such as XLM-RoBERTa (XLM-R), have been shown to be effective on many cross-lingual tasks. However, gaps remain between the contextualized representations of similar words in different languages. To address this problem, we propose a novel framework named Multi-View Mixed Language Training (MVMLT), which leverages code-switched data with multi-view learning to fine-tune XLM-R. MVMLT uses gradient-based saliency to extract the keywords most relevant to downstream tasks and dynamically replaces them with the corresponding words in the target language. Furthermore, MVMLT utilizes multi-view learning to encourage contextualized embeddings to align into a more refined language-invariant space. Extensive experiments on four languages show that our model achieves state-of-the-art results on zero-shot cross-lingual sentiment classification and dialogue state tracking, demonstrating the effectiveness of our proposed approach.
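The abstract does not spell out the saliency formulation, so as one illustration only: a common gradient-based saliency variant is gradient-times-input, scoring each token by how strongly the task loss responds to its embedding. The toy mean-pooled linear classifier, the function names, and the top-k selection below are all assumptions for the sketch, not the paper's actual method.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def saliency_scores(E, W, label):
    """Gradient-times-input saliency for a toy mean-pooled linear classifier.

    E: (n_tokens, d) token embeddings; W: (d, n_classes); label: gold class index.
    Returns one non-negative score per token.
    """
    n = E.shape[0]
    pooled = E.mean(axis=0)                 # mean pooling over tokens
    probs = softmax(pooled @ W)             # class probabilities
    dlogits = probs.copy()
    dlogits[label] -= 1.0                   # gradient of cross-entropy wrt logits
    g_pooled = W @ dlogits                  # gradient wrt the pooled embedding
    g_tok = g_pooled / n                    # chain rule: each token gets 1/n of it
    return np.abs(E @ g_tok)                # |gradient . input| per token

def top_k_keywords(tokens, scores, k):
    """Pick the k most salient tokens as candidates for code-switching."""
    idx = np.argsort(scores)[::-1][:k]
    return [tokens[i] for i in sorted(idx)]
```

In the MVMLT setting, the selected keywords would then be swapped for their target-language translations to build code-switched training views.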