Neural Machine Translation (NMT) has achieved a significant breakthrough in performance but is known to be vulnerable to input perturbations. Since real input noise is difficult to predict during training, robustness is a major issue for system deployment. In this paper, we improve the robustness of NMT models by reducing the effect of noisy words through a Context-Enhanced Reconstruction (CER) approach. CER trains the model to resist noise in two steps: (1) a perturbation step that breaks the naturalness of the input sequence with made-up words; (2) a reconstruction step that counters noise propagation by generating better and more robust contextual representations. Experimental results on Chinese-English (ZH-EN) and French-English (FR-EN) translation tasks demonstrate robustness improvements on both news and social media text. Further fine-tuning experiments on social media text show that our approach can converge to a better optimum and provides better adaptation.
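The abstract does not specify how the made-up words of the perturbation step are generated; the sketch below is a minimal illustration only, assuming character-shuffled tokens as a stand-in for real input noise such as typos. The function name `perturb` and the `noise_ratio` parameter are our own, not from the paper.

```python
import random

def perturb(tokens, noise_ratio=0.2, seed=None):
    """Perturbation step (illustrative): replace a fraction of tokens
    with made-up words.

    A made-up word here is a random character shuffle of the original
    token, standing in for unpredictable real-world noise that the
    model should learn to resist.
    """
    rng = random.Random(seed)
    perturbed = list(tokens)
    n_noise = max(1, int(len(perturbed) * noise_ratio))
    # Pick distinct positions to corrupt, then scramble each token.
    for i in rng.sample(range(len(perturbed)), n_noise):
        chars = list(perturbed[i])
        rng.shuffle(chars)
        perturbed[i] = "".join(chars)
    return perturbed

sentence = "the quick brown fox jumps over the lazy dog".split()
noisy = perturb(sentence, seed=0)
print(" ".join(noisy))
```

In the full CER setup, such perturbed sequences would feed the reconstruction step, which trains the encoder to produce contextual representations for the noisy input that stay close to those of the clean sentence.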