The size of the vocabulary is a central design choice in large pretrained language models, with respect to both performance and memory requirements. Typically, subword tokenization algorithms such as byte pair encoding and WordPiece are used. In this work, we investigate the compatibility of tokenizations for multilingual static and contextualized embedding spaces and propose a measure that reflects the compatibility of tokenizations across languages. Our goal is to prevent incompatible tokenizations, e.g., "wine" (word-level) in English vs. "v i n" (character-level) in French, which make it hard to learn good multilingual semantic representations. We show that our compatibility measure allows the system designer to create vocabularies across languages that are compatible -- a desideratum that so far has been neglected in multilingual models.
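The granularity mismatch described above can be made concrete with a small sketch. The toy vocabulary and the greedy longest-match tokenizer below are illustrative assumptions, not the paper's actual algorithm or its compatibility measure; they merely show how a shared subword vocabulary biased toward one language can tokenize "wine" as a single word-level token while forcing its French counterpart "vin" down to characters.

```python
def tokenize(word, vocab):
    """Greedy longest-match subword segmentation of `word` using `vocab`.

    Single characters are always allowed as a fallback, mimicking the
    character-level backoff of common subword schemes.
    """
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:  # char fallback at j == i + 1
                tokens.append(piece)
                i = j
                break
    return tokens

# Hypothetical shared vocabulary biased toward English:
# "wine" is a single entry, but "vin" has no multi-character pieces.
vocab = {"wine", "w", "i", "n", "e", "v"}

en = tokenize("wine", vocab)  # word-level segmentation
fr = tokenize("vin", vocab)   # character-level segmentation
print(en)  # ['wine']
print(fr)  # ['v', 'i', 'n']
```

A simple proxy for the incompatibility the abstract warns about is the gap in tokens per word across translations of the same concept: here 1 token for English vs. 3 for French, so the two languages end up represented at very different granularities in the shared embedding space.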