Dravidian languages, such as Kannada and Tamil, are notoriously difficult for state-of-the-art neural models to translate. This stems from the fact that these languages are morphologically very rich as well as being low-resourced. In this paper, we focus on subword segmentation and evaluate Linguistically Motivated Vocabulary Reduction (LMVR) against the more commonly used SentencePiece (SP) for the task of translating from English into four different Dravidian languages. Additionally, we investigate the optimal subword vocabulary size for each language. We find that SP is the overall best choice for segmentation, and that larger dictionary sizes lead to higher translation quality.
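Since the abstract compares SentencePiece (SP) segmentation at several subword vocabulary sizes, the minimal sketch below illustrates how such a comparison could be set up with the sentencepiece Python package. The corpus file name, the vocabulary-size grid, the sample sentence, and the training options are illustrative assumptions; the paper's actual configuration is not given in the abstract.

# Sketch: train SP unigram models at a few vocabulary sizes and segment a
# sample sentence with each one. File names and sizes are hypothetical.
import sentencepiece as spm

CORPUS = "train.kn"                  # hypothetical target-side (e.g. Kannada) training text
VOCAB_SIZES = [8000, 16000, 32000]   # illustrative grid of subword vocabulary sizes

for size in VOCAB_SIZES:
    prefix = f"sp_kn_{size}"
    # Train a unigram-LM SentencePiece model with the given vocabulary size.
    spm.SentencePieceTrainer.train(
        input=CORPUS,
        model_prefix=prefix,
        vocab_size=size,
        model_type="unigram",
        character_coverage=1.0,      # full character coverage suits non-Latin scripts
    )
    # Load the trained model and segment a sample sentence into subword pieces.
    sp = spm.SentencePieceProcessor(model_file=f"{prefix}.model")
    print(size, sp.encode("ಇದು ಒಂದು ಉದಾಹರಣೆ ವಾಕ್ಯ.", out_type=str))

LMVR would replace the training step above with a morphologically motivated segmenter (based on Morfessor FlatCat); the abstract does not specify the settings used for either tool, so the snippet should be read only as a rough template for varying the subword vocabulary size.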
References used
https://aclanthology.org/