Compared to monolingual models, cross-lingual models usually require a more expressive vocabulary to represent all languages adequately. We find that many languages are under-represented in recent cross-lingual language models due to the limited vocabulary capacity. To this end, we propose an algorithm, VoCap, to determine the desired vocabulary capacity of each language. However, increasing the vocabulary size significantly slows down the pre-training speed. To address this issue, we propose k-NN-based target sampling to accelerate the expensive softmax. Our experiments show that the multilingual vocabulary learned with VoCap benefits cross-lingual language model pre-training. Moreover, k-NN-based target sampling mitigates the side-effects of increasing the vocabulary size while achieving comparable performance and faster pre-training speed. The code and the pretrained multilingual vocabularies are available at https://github.com/bozheng-hit/VoCapXLM.
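To make the second idea concrete, below is a minimal sketch of k-NN-based target sampling for a large-vocabulary softmax: the cross-entropy is computed only over the gold tokens in the batch plus the k nearest vocabulary neighbours of their output embeddings, rather than over the full vocabulary. The function name knn_sampled_softmax_loss, the tensor shapes, the value of k, and the use of the output embedding matrix as the k-NN index are illustrative assumptions, not the exact procedure from the paper.

```python
# Minimal sketch of k-NN-based target sampling for a large-vocabulary softmax.
# The candidate construction below (in-batch gold tokens + their k nearest
# vocabulary neighbours) is an assumption for illustration, not the paper's
# exact recipe.
import torch
import torch.nn.functional as F


def knn_sampled_softmax_loss(hidden, output_embed, targets, k=256):
    """Cross-entropy over a reduced candidate set instead of the full vocabulary.

    hidden:       (batch, dim)   hidden states at the masked positions
    output_embed: (vocab, dim)   output (softmax) embedding matrix
    targets:      (batch,)       gold token ids
    """
    device = targets.device

    # 1. Start the candidate set with the gold targets in the batch.
    candidates = torch.unique(targets)

    # 2. Add the k nearest vocabulary neighbours of each gold target's output
    #    embedding, so that hard negatives are always part of the softmax.
    sims = output_embed[candidates] @ output_embed.t()          # (c, vocab)
    knn = sims.topk(k, dim=-1).indices.reshape(-1)              # (c * k,)
    candidates = torch.unique(torch.cat([candidates, knn]))     # (m,), m << vocab

    # 3. Compute logits and the loss only over the m candidates.
    logits = hidden @ output_embed[candidates].t()              # (batch, m)

    # Remap gold token ids to their positions inside the candidate set.
    remap = torch.full((output_embed.size(0),), -1, dtype=torch.long, device=device)
    remap[candidates] = torch.arange(candidates.numel(), device=device)
    return F.cross_entropy(logits, remap[targets])
```

Because the candidate set changes from batch to batch, every vocabulary entry is still updated over the course of pre-training, while each individual step only touches a small slice of the output embedding matrix instead of the full softmax over the enlarged vocabulary.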