This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with noisy input for robust neural machine translation by introducing self-correcting predictors. Different from previous robust approaches, Secoco enables NMT to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco is able to achieve significant improvements over strong baselines on two real-world test sets and a benchmark WMT dataset with good interpretability. We will make our code and dataset publicly available soon.
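To make the abstract's mechanism concrete, below is a minimal sketch (not the authors' code) of the Secoco idea as described: alongside a standard NMT encoder, token-level self-correcting predictors are trained to flag which noisy source tokens should be deleted and which tokens should be inserted, jointly with translation. All module names, head designs, and the loss weighting here are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of self-correcting predictors on top of an NMT encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCorrectingEncoder(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Self-correcting predictor heads over the encoder states (assumed design):
        self.delete_head = nn.Linear(d_model, 1)            # should this source token be deleted?
        self.insert_head = nn.Linear(d_model, vocab_size)   # which token (if any) to insert at this position

    def forward(self, src_tokens: torch.Tensor):
        h = self.encoder(self.embed(src_tokens))             # (batch, src_len, d_model)
        return h, self.delete_head(h).squeeze(-1), self.insert_head(h)

def correction_loss(delete_logits, insert_logits, delete_labels, insert_labels, pad_id=0):
    """Auxiliary correction loss, added to the usual translation loss (weighting is an assumption)."""
    del_loss = F.binary_cross_entropy_with_logits(delete_logits, delete_labels.float())
    ins_loss = F.cross_entropy(insert_logits.transpose(1, 2), insert_labels, ignore_index=pad_id)
    return del_loss + ins_loss

# Toy usage with random data, just to show the shapes involved.
model = SelfCorrectingEncoder(vocab_size=1000)
src = torch.randint(1, 1000, (2, 7))            # (batch, src_len) noisy source token ids
states, del_logits, ins_logits = model(src)
del_labels = torch.randint(0, 2, (2, 7))        # 1 = token is noise and should be deleted
ins_labels = torch.randint(0, 1000, (2, 7))     # token to insert at each position (0 = none/pad)
loss = correction_loss(del_logits, ins_logits, del_labels, ins_labels)
loss.backward()
```

Under this reading, the encoder states feed both the translation decoder and the correction heads, which is what would let the model repair noisy input "simultaneously with the translation decoding process" and expose its corrections for interpretability.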