This paper describes DUT-NLP Lab's submission to the WMT-21 triangular machine translation shared task. Participants are not allowed to use data beyond what is provided, and the translation direction of the task is Russian-to-Chinese. In this task, we use the Transformer as our baseline model and integrate several techniques to enhance the performance of the baseline, including data filtering, data selection, fine-tuning, and post-editing. Further, to make use of English resources such as Russian/English and Chinese/English parallel data, the relationship triangle is constructed with multilingual neural machine translation systems. As a result, our submission achieves a BLEU score of 21.9 on Russian-to-Chinese.
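The data-filtering step is only named in the abstract, not specified. As a rough illustration only, the sketch below shows the kind of heuristic filtering (length-ratio and script checks) commonly applied to Russian/Chinese parallel data; the helper names `keep_pair` and `filter_corpus` are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of parallel-corpus filtering (not the authors' exact pipeline):
# drop pairs with an empty side, an extreme length ratio, or wrong-script content.
import re

CYRILLIC = re.compile(r"[\u0400-\u04FF]")  # Russian source should contain Cyrillic
CJK = re.compile(r"[\u4E00-\u9FFF]")       # Chinese target should contain CJK characters

def keep_pair(ru: str, zh: str, max_ratio: float = 3.0) -> bool:
    """Return True if a Russian/Chinese sentence pair passes simple heuristics."""
    ru, zh = ru.strip(), zh.strip()
    if not ru or not zh:
        return False
    # Length-ratio filter: Russian whitespace tokens vs. Chinese characters.
    ru_len, zh_len = len(ru.split()), len(zh)
    if max(ru_len, zh_len) / min(ru_len, zh_len) > max_ratio:
        return False
    # Script check on both sides.
    return bool(CYRILLIC.search(ru)) and bool(CJK.search(zh))

def filter_corpus(pairs):
    """Keep only sentence pairs that pass the heuristics above."""
    return [(ru, zh) for ru, zh in pairs if keep_pair(ru, zh)]
```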