Neural Machine Translation (NMT) for Low Resource Languages (LRL) is often limited by the lack of available training data, making it necessary to explore additional techniques to improve translation quality. We propose the use of the Prefix-Root-Postfix-Encoding (PRPE) subword segmentation algorithm to improve translation quality for LRLs, using two agglutinative languages as case studies: Quechua and Indonesian. During the course of our experiments, we reintroduce a parallel corpus for Quechua-Spanish translation that was previously unavailable for NMT. Our experiments show the importance of appropriate subword segmentation, which can go as far as improving translation quality over systems trained on much larger quantities of data. We show this by achieving state-of-the-art results for both languages, obtaining higher BLEU scores than large pre-trained models with much smaller amounts of data.
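To make the idea of prefix-root-postfix style subword segmentation concrete, the sketch below shows a toy, lexicon-driven splitter for an agglutinative word form. This is only an illustration under stated assumptions, not the actual PRPE algorithm from the paper: the prefix, root, and suffix inventories are hypothetical, and the "@@" join marker simply mimics the common BPE-style subword convention used in NMT pipelines.

```python
# Toy prefix/root/suffix segmentation sketch (NOT the PRPE algorithm itself).
# The affix and root inventories below are hypothetical examples chosen to
# mimic agglutinative (Quechua-like) morphology.

PREFIXES = {"mana"}                     # hypothetical prefix inventory
ROOTS = {"wasi", "runa", "llaqta"}      # hypothetical root inventory
SUFFIXES = {"kuna", "pi", "man", "ta"}  # hypothetical suffix inventory


def segment(word):
    """Greedily split a word into prefix + root + suffix subword pieces."""
    pieces = []
    # 1. Strip a known prefix, if one is present.
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p):
            pieces.append(p + "@@")
            word = word[len(p):]
            break
    # 2. Match the longest known root at the start of the remainder.
    root = max((r for r in ROOTS if word.startswith(r)), key=len, default=None)
    if root is None:
        pieces.append(word)  # unknown root: leave the word unsegmented
        return pieces
    rest = word[len(root):]
    pieces.append(root + "@@" if rest else root)
    # 3. Peel known suffixes off the remaining tail, longest match first.
    while rest:
        suf = max((s for s in SUFFIXES if rest.startswith(s)), key=len, default=None)
        if suf is None:
            pieces.append(rest)
            break
        rest = rest[len(suf):]
        pieces.append(suf + "@@" if rest else suf)
    return pieces


if __name__ == "__main__":
    # "wasikunapi" ~ "in the houses": root "wasi" + plural "kuna" + locative "pi"
    print(segment("wasikunapi"))  # ['wasi@@', 'kuna@@', 'pi']
```

The point of such morphologically informed splits is that frequent roots and affixes are reused across many surface forms, which is what lets a segmentation-aware NMT system generalize from small training corpora.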