This paper describes the SEBAMAT contribution to the 2021 WMT Similar Language Translation shared task. Using the Marian neural machine translation toolkit, translation systems based on Google's transformer architecture were built in both directions of Catalan--Spanish and Portuguese--Spanish. The systems were trained in two contrastive parameter settings (different vocabulary sizes for byte pair encoding) using only the parallel but not the comparable corpora provided by the shared task organizers. According to the official evaluation results, the SEBAMAT systems turned out to be competitive, ranking among the top teams with BLEU scores between 38 and 47 for the language pairs involving Portuguese and between 76 and 80 for the language pairs involving Catalan.
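The contrastive settings differ only in the byte pair encoding (BPE) vocabulary size, i.e. in how many merge operations are learned from the training data. As an illustrative sketch (not the SEBAMAT pipeline itself, which relies on Marian's tooling), the core BPE learning loop in the style of Sennrich et al. can be written as follows; the function name `learn_bpe` and the toy word list are hypothetical:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge operations from a list of words (Sennrich-style sketch)."""
    # Represent each word as a tuple of characters plus an end-of-word marker.
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count frequencies of all adjacent symbol pairs across the vocabulary.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        # Replace every occurrence of the best pair with its concatenation.
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1])
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
        merges.append(best)
    return merges

merges = learn_bpe(["low"] * 5 + ["newest"] * 6, 2)
print(merges)
```

The number of merges directly controls the subword vocabulary size, which is the parameter varied between the two contrastive systems described above.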