This paper presents our submission to the WMT2021 Shared Task on Quality Estimation (QE). We participate in sentence-level prediction of human judgments and post-editing effort. We propose a glass-box approach based on attention weights extracted from machine translation systems. In contrast to previous work, we explore the attention weight matrices directly, without replacing them with general metrics (such as entropy). We show that some of our models can be trained with a small amount of high-cost labelled data. In the absence of training data, our approach, trained on synthetic data instead, still demonstrates a moderate linear correlation.
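The abstract only sketches the approach; the exact MT systems and feature pipeline are described in the full paper. As a rough illustration of the contrast it draws, the following Python sketch extracts cross-attention matrices from a translation model and shows both options: collapsing them into a general metric (entropy) versus keeping the matrices themselves as features. It is a minimal sketch, assuming a Hugging Face MarianMT checkpoint (`Helsinki-NLP/opus-mt-en-de`) and a recent `transformers` version; the model choice, padding sizes, and helper names are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch only: model name, padding sizes, and helpers are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-de"  # hypothetical MT system choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()


def cross_attention_matrices(src: str, hyp: str):
    """Return per-layer cross-attention tensors of shape (heads, tgt_len, src_len)."""
    enc = tokenizer(src, return_tensors="pt")
    dec = tokenizer(text_target=hyp, return_tensors="pt")  # needs transformers >= 4.22
    with torch.no_grad():
        out = model(
            input_ids=enc.input_ids,
            attention_mask=enc.attention_mask,
            decoder_input_ids=dec.input_ids,
            output_attentions=True,
        )
    # out.cross_attentions: tuple of num_layers tensors, each (batch=1, heads, tgt, src)
    return [a.squeeze(0) for a in out.cross_attentions]


def entropy_summary(attn_layers):
    """Baseline 'general metric': mean entropy of attention distributions over source tokens."""
    ents = []
    for a in attn_layers:
        p = a.clamp_min(1e-12)
        ents.append(-(p * p.log()).sum(dim=-1).mean())
    return torch.stack(ents).mean().item()


def raw_matrix_features(attn_layers, max_tgt=64, max_src=64):
    """Glass-box style features: keep the (head-averaged) attention matrices themselves,
    padded/cropped to a fixed size, instead of collapsing them to a single scalar."""
    feats = []
    for a in attn_layers:
        m = a.mean(dim=0)  # average over heads -> (tgt_len, src_len)
        padded = torch.zeros(max_tgt, max_src)
        t, s = min(m.size(0), max_tgt), min(m.size(1), max_src)
        padded[:t, :s] = m[:t, :s]
        feats.append(padded.flatten())
    return torch.cat(feats)


# Usage: either scalar summary or full-matrix features for a downstream QE regressor.
layers = cross_attention_matrices("The cat sat on the mat.", "Die Katze saß auf der Matte.")
print("entropy summary:", entropy_summary(layers))
print("raw feature vector size:", raw_matrix_features(layers).shape)
```

A downstream sentence-level QE model would then regress human judgment or post-editing effort scores from such features; the padding-based flattening above is only one of several plausible ways to feed full matrices to a predictor.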