We present the results of the first task on Large-Scale Multilingual Machine Translation. The task consists of the many-to-many evaluation of a single model across a variety of source and target languages. This year, the task consisted of three different settings: (i) SMALL-TASK1 (Central/South-Eastern European languages), (ii) SMALL-TASK2 (South-East Asian languages), and (iii) FULL-TASK (all 101 × 100 language pairs). All tasks used the FLORES-101 dataset as the evaluation benchmark. To ensure the longevity of the dataset, the test sets were not publicly released, and the models were evaluated in a controlled environment on Dynabench. A total of 10 teams participated in the tasks, with 151 intermediate model submissions and 13 final models. This year's results show a significant improvement over the known baselines, with +17.8 BLEU for SMALL-TASK2, +10.6 for FULL-TASK, and +3.6 for SMALL-TASK1.
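To make the reported numbers concrete, the sketch below shows how corpus-level BLEU is typically computed for a single language pair. This is a simplified, self-contained illustration (single reference per sentence, whitespace tokenization, uniform 4-gram weights), not the exact implementation used for the official evaluation, and the example sentences are placeholders rather than FLORES-101 data:

```python
# Simplified corpus-level BLEU: modified n-gram precision (n = 1..4)
# with a brevity penalty, assuming one reference per hypothesis and
# whitespace tokenization.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(hypotheses, references, max_n=4):
    num = [0] * max_n   # clipped n-gram matches, per order
    den = [0] * max_n   # total hypothesis n-grams, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            h_counts = Counter(ngrams(h, n))
            r_counts = Counter(ngrams(r, n))
            # clip each hypothesis n-gram count by its reference count
            num[n - 1] += sum(min(c, r_counts[g]) for g, c in h_counts.items())
            den[n - 1] += sum(h_counts.values())
    if min(num) == 0 or min(den) == 0:
        return 0.0
    log_precision = sum(math.log(num[i] / den[i]) for i in range(max_n)) / max_n
    # brevity penalty: penalize hypotheses shorter than the reference
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100.0 * bp * math.exp(log_precision)

score = corpus_bleu(["the cat sat on the mat"], ["the cat sat on the mat"])
print(f"BLEU = {score:.1f}")  # → 100.0 for an exact match
```

In the many-to-many setting, a score like this is computed independently for each of the 101 × 100 directions and then aggregated, which is why a single multilingual model must translate well across all source-target combinations.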