This paper describes the NoahNMT system submitted to the WMT 2021 shared task of Very Low Resource Supervised Machine Translation. The system is a standard Transformer model equipped with our recent technique of dual transfer. It also employs widely used techniques that are known to be helpful for neural machine translation, including iterative back-translation, selected finetuning, and ensemble. The final submission achieves the top BLEU for three translation directions.