In this work, we present an information-theoretic framework that formulates cross-lingual language model pre-training as maximizing mutual information between multilingual-multi-granularity texts. The unified view helps us to better understand the existing methods for learning cross-lingual representations. More importantly, inspired by the framework, we propose a new pre-training task based on contrastive learning. Specifically, we regard a bilingual sentence pair as two views of the same meaning and encourage their encoded representations to be more similar than the negative examples. By leveraging both monolingual and parallel corpora, we jointly train the pretext tasks to improve the cross-lingual transferability of pre-trained models. Experimental results on several benchmarks show that our approach achieves considerably better performance. The code and pre-trained models are available at https://aka.ms/infoxlm.
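As a rough illustration of the contrastive pretext task described in the abstract, the sketch below computes an InfoNCE-style loss over a batch of bilingual sentence pairs, treating each translation pair as two views of the same meaning and using the other pairs in the batch as negatives. This is only a minimal sketch under assumed details: the function name, the temperature value, and the use of in-batch negatives are illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def bilingual_contrastive_loss(src_repr: torch.Tensor,
                               tgt_repr: torch.Tensor,
                               temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss over a batch of bilingual sentence pairs.

    src_repr, tgt_repr: [batch, dim] encoded sentence representations of the
    two "views" (a sentence and its translation). Each source sentence is
    pulled toward its own translation and pushed away from the other
    translations in the batch (in-batch negatives). Hypothetical sketch,
    not the paper's released code.
    """
    src = F.normalize(src_repr, dim=-1)
    tgt = F.normalize(tgt_repr, dim=-1)
    # Similarity of every source sentence to every target sentence.
    logits = src @ tgt.t() / temperature          # [batch, batch]
    # The matching translation sits on the diagonal.
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Toy usage with random "encoded representations".
    batch, dim = 8, 768
    src = torch.randn(batch, dim)   # e.g. pooled vectors of source sentences
    tgt = torch.randn(batch, dim)   # e.g. pooled vectors of their translations
    print(bilingual_contrastive_loss(src, tgt).item())
```

Minimizing this cross-entropy is equivalent to maximizing a lower bound on the mutual information between the two views, which is how the contrastive task fits the information-theoretic framing above.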