We report the results of the WMT 2021 shared task on Quality Estimation, where the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels. This edition focused on two main novel additions: (i) prediction for unseen languages, i.e. zero-shot settings, and (ii) prediction of sentences with catastrophic errors. In addition, new data was released for a number of languages, especially post-edited data. Participating teams from 19 institutions submitted altogether 1263 systems to different task variants and language pairs.
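As context for the sentence-level Direct Assessment (DA) task mentioned above, the sketch below illustrates how submissions of this kind are commonly scored: system predictions are compared against gold DA labels using Pearson correlation. The scores, variable names, and snippet are illustrative assumptions only and are not taken from the task's official evaluation scripts.

```python
# Minimal sketch of sentence-level QE scoring against gold DA labels,
# assuming Pearson correlation as the primary metric.
from scipy.stats import pearsonr

# Hypothetical gold DA z-scores and system predictions for a few segments
# (made-up numbers, for illustration only).
gold_scores = [0.42, -1.10, 0.05, 0.88, -0.37]
predicted_scores = [0.35, -0.90, 0.20, 0.75, -0.50]

pearson, p_value = pearsonr(gold_scores, predicted_scores)
print(f"Pearson r = {pearson:.3f} (p = {p_value:.3f})")
```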