
The Impact of Positional Encodings on Multilingual Compression


Publication date: 2021
Language: English

In order to preserve word-order information in a non-autoregressive setting, transformer architectures tend to include positional knowledge, for instance by adding positional encodings to token embeddings. Several modifications have been proposed over the sinusoidal positional encodings used in the original transformer architecture; these include, for instance, separating position encodings from token embeddings, or directly modifying attention weights based on the distance between word pairs. We first show that, surprisingly, while these modifications tend to improve monolingual language models, none of them result in better multilingual language models. We then explain why that is: sinusoidal encodings were explicitly designed to facilitate compositionality by allowing linear projections over arbitrary time steps. Higher variance in multilingual training distributions requires higher compression, in which case compositionality becomes indispensable. Learned absolute positional encodings (e.g., in mBERT) tend to approximate sinusoidal embeddings in multilingual settings, but more complex positional encoding architectures lack the inductive bias to effectively learn cross-lingual alignment. In other words, while sinusoidal positional encodings were designed for monolingual applications, they are particularly useful in multilingual language models.
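To make the compositionality argument concrete, the following is a minimal NumPy sketch (not taken from the paper) of the original sinusoidal encodings, verifying the property the abstract relies on: the encoding of position pos + k is an exact linear function of the encoding of pos, with a projection matrix that depends only on the offset k. Function names and dimensions are illustrative.

```python
import numpy as np

def sinusoidal_encoding(positions, d_model):
    """Sinusoidal encodings from the original transformer:
    PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    inv_freq = 1.0 / (10000 ** (np.arange(0, d_model, 2) / d_model))
    angles = np.outer(positions, inv_freq)          # (num_positions, d_model / 2)
    pe = np.zeros((len(positions), d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def shift_matrix(k, d_model):
    """Block-diagonal rotation M_k such that PE(pos + k) = M_k @ PE(pos) for every pos."""
    inv_freq = 1.0 / (10000 ** (np.arange(0, d_model, 2) / d_model))
    M = np.zeros((d_model, d_model))
    for i, w in enumerate(inv_freq):
        c, s = np.cos(k * w), np.sin(k * w)
        M[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, s], [-s, c]]  # rotate the (sin, cos) pair
    return M

d_model, k = 16, 3
pe = sinusoidal_encoding(np.arange(20), d_model)
M = shift_matrix(k, d_model)
# Shifting every position by k is a single linear projection, independent of pos.
assert np.allclose(pe[:-k] @ M.T, pe[k:])
```

Learned or more elaborate position representations do not come with such a fixed linear shift structure, which is the inductive bias the abstract credits for better multilingual compression.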

Related research

Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. Such variation is typical of social media, on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, there has been no common benchmark for comparing systems across languages with a homogeneous data and evaluation setup. The MultiLexNorm shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark, including 13 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. As extrinsic evaluation, we use dependency parsing and part-of-speech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task, hosted at W-NUT 2021, attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected, but to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system.
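As a concrete illustration of the task definition only (the MultiLexNorm systems themselves are neural), a lexical normalizer maps non-canonical tokens in an utterance to their standard forms. The lookup table and function below are a toy sketch, not part of the shared task; real systems must also handle one-to-many mappings and unseen variants.

```python
# Toy lexical normalizer: a lookup table of non-canonical -> standard forms.
# Entries and the function name are made up purely for illustration.
NORMALIZATION_LEXICON = {"u": "you", "r": "are", "gr8": "great", "2morrow": "tomorrow"}

def normalize(utterance: str) -> str:
    """Replace each token with its standardized form when one is known."""
    return " ".join(NORMALIZATION_LEXICON.get(tok, tok) for tok in utterance.split())

print(normalize("u r gr8"))  # -> "you are great"
```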
During the last decade of the twentieth century, a set of advanced technological changes emerged in the fields of computer-based information systems, means of communication, and the compression and transfer of data over computer networks. Information systems shifted from relying on text and a few simple graphics to relying on multimedia, which delivers information in different forms through the interconnection and integration of a diverse set of technologies (audio, images, text, video, etc.). The development of these systems was initially limited to standalone use, but given the importance of communication systems, the growth of the Internet, and the use of multimedia systems by many users in geographically different locations, sharing multimedia data became important, and exchanging it over computer networks became inevitable. This created the need for networks with special specifications that can handle multimedia elements with high efficiency, and, on the other hand, for multimedia systems capable of working with computer networks. From this we see that such systems are characterized by very large data volumes, in addition to real difficulty in transferring this data, especially over computer networks. Therefore, the problems of storing large volumes of data relative to the limited capacity of storage devices, and of transferring large amounts of data over networks, have driven the development of techniques that reduce (compress) data sizes as much as possible, saving storage space on the one hand and transmission time on the other.
We present the results of the first task on Large-Scale Multilingual Machine Translation. The task consists of the many-to-many evaluation of a single model across a variety of source and target languages. This year, the task consisted of three different settings: (i) SMALL-TASK1 (Central/South-Eastern European languages), (ii) SMALL-TASK2 (South-East Asian languages), and (iii) FULL-TASK (all 101 x 100 language pairs). All the tasks used the FLORES-101 dataset as the evaluation benchmark. To ensure the longevity of the dataset, the test sets were not publicly released and the models were evaluated in a controlled environment on Dynabench. There were a total of 10 participating teams for the tasks, with a total of 151 intermediate model submissions and 13 final models. This year's results show a significant improvement over the known baselines, with +17.8 BLEU for SMALL-TASK2, +10.6 for FULL-TASK and +3.6 for SMALL-TASK1.
India is known as the land of many tongues and dialects. Neural machine translation (NMT) is the current state-of-the-art approach for machine translation (MT) but performs better only with large datasets, which Indian languages usually lack, making this approach infeasible. So, in this paper, we address the problem of data scarcity by efficiently training multilingual and multilingual multi-domain NMT systems involving languages of the Indian subcontinent. We propose a technique for using joint domain and language tags in a multilingual setup. We draw three major conclusions from our experiments: (i) training a multilingual system by exploiting lexical similarity based on language family helps in achieving an overall average improvement of ?.?? BLEU points over bilingual baselines, (ii) incorporating domain information into the language tokens helps the multilingual multi-domain system achieve a significant average improvement of ? BLEU points over the baselines, and (iii) multistage fine-tuning further helps in getting an improvement of ?-?.? BLEU points for the language pair of interest.
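A minimal sketch of how joint language and domain tags can be attached to training examples in such a multilingual multi-domain setup. The exact tag format used in the paper is not specified here, so the tokens below (<2xx>, <dom:...>) and the function name are assumptions for illustration.

```python
def tag_source(sentence: str, target_lang: str, domain: str) -> str:
    """Prepend a target-language token and a domain token to the source sentence,
    so a single shared NMT model can condition on both language and domain."""
    return f"<2{target_lang}> <dom:{domain}> {sentence}"

# Example: an English sentence to be translated into Hindi, news domain.
print(tag_source("The talks resumed on Monday.", "hi", "news"))
# -> "<2hi> <dom:news> The talks resumed on Monday."
```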
India is one of the richest language hubs on Earth and is very diverse and multilingual. But apart from a few Indian languages, most of them are still considered resource-poor. Since most NLP techniques require either linguistic knowledge that can only be developed by experts and native speakers of that language, or a lot of labelled data which is again expensive to generate, the task of text classification becomes challenging for most Indian languages. The main objective of this paper is to see how one can benefit from the lexical similarity found in Indian languages in a multilingual scenario. Can a classification model trained on one Indian language be reused for other Indian languages? So, we performed zero-shot text classification by exploiting lexical similarity, and we observed that our model performs best in those cases where the vocabulary overlap between the language datasets is maximum. Our experiments also confirm that a single multilingual model trained by exploiting language relatedness outperforms the baselines by significant margins.
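The vocabulary-overlap observation can be made concrete with a simple measure such as the Jaccard overlap between the token vocabularies of two datasets. The metric choice, function name, and the romanized toy sentences below are assumptions for illustration, not taken from the paper.

```python
def vocabulary_overlap(corpus_a: list[str], corpus_b: list[str]) -> float:
    """Jaccard overlap between the token vocabularies of two corpora."""
    vocab_a = {tok for sent in corpus_a for tok in sent.split()}
    vocab_b = {tok for sent in corpus_b for tok in sent.split()}
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

# Closely related languages (toy romanized Hindi vs. Urdu sentences) share much
# of their vocabulary, which is where zero-shot transfer is expected to work best.
hindi = ["main ghar ja raha hoon"]
urdu = ["main ghar ja raha hun"]
print(vocabulary_overlap(hindi, urdu))  # -> 0.666...
```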
