Quality Estimation (QE) plays an essential role in applications of Machine Translation (MT). Traditionally, a QE system accepts the original source text and the translation from a black-box MT system as input. Recently, a few studies have indicated that, as a by-product of translation, QE benefits from information about the model and training data of the MT system that produced the translations; this setting is called "glass-box QE". In this paper, we extend the definition of "glass-box QE" to uncertainty quantification with both "black-box" and "glass-box" approaches, and we design several features derived from them to blaze a new trail in improving QE performance. We propose a framework that fuses the feature engineering of uncertainty quantification into a pre-trained cross-lingual language model to predict translation quality. Experimental results show that our method achieves state-of-the-art performance on the WMT 2020 QE shared task datasets.
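To make the fusion idea concrete, here is a minimal sketch, not the authors' implementation: sentence-level uncertainty features are aggregated from the MT system's per-token log-probabilities (a common glass-box signal) and concatenated with a pooled representation from a cross-lingual encoder before a small regression head predicts the quality score. The specific feature set (mean/std/min of log-probabilities), the `FusionQE` module, and all tensor shapes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of fusing uncertainty-quantification
# features with a cross-lingual sentence representation for QE regression.
# Assumption: per-token log-probabilities come from the MT system (glass-box)
# or from forced decoding with a proxy model (black-box); the pooled vector
# stands in for a pre-trained cross-lingual LM such as XLM-R.
import torch
import torch.nn as nn

def uncertainty_features(token_logprobs: torch.Tensor) -> torch.Tensor:
    """Aggregate one hypothesis's per-token log-probabilities into
    sentence-level uncertainty features (mean, std, min) - an assumed,
    commonly used glass-box feature set."""
    return torch.stack([
        token_logprobs.mean(),
        token_logprobs.std(unbiased=False),
        token_logprobs.min(),
    ])

class FusionQE(nn.Module):
    """Concatenate the pooled LM representation with the engineered
    uncertainty features and regress a quality score (e.g., DA/HTER)."""
    def __init__(self, hidden_size: int = 768, n_feats: int = 3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(hidden_size + n_feats, 256),
            nn.Tanh(),
            nn.Linear(256, 1),
        )

    def forward(self, pooled: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([pooled, feats], dim=-1)).squeeze(-1)

# Toy usage with random tensors standing in for real model outputs.
logprobs = torch.log_softmax(torch.randn(12, 32000), dim=-1).max(dim=-1).values
feats = uncertainty_features(logprobs).unsqueeze(0)   # (1, n_feats)
pooled = torch.randn(1, 768)                          # (1, hidden) from the encoder
score = FusionQE()(pooled, feats)
print(score.item())
```

In this design, the hand-engineered uncertainty features enter the network alongside the learned representation rather than replacing it, so the regression head can weigh both signals when predicting quality.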