For most language combinations, parallel data is either scarce or simply unavailable. To address this, unsupervised machine translation (UMT) exploits large amounts of monolingual data by using synthetic data generation techniques such as back-translation and noising, while self-supervised NMT (SSNMT) identifies parallel sentences in smaller comparable data and trains on them. To date, the inclusion of UMT data generation techniques in SSNMT has not been investigated. We show that including UMT techniques in SSNMT significantly outperforms SSNMT alone (up to +4.3 BLEU, af2en) as well as statistical (+50.8 BLEU) and hybrid UMT (+51.5 BLEU) baselines on related, distantly related and unrelated language pairs.
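To make the noising technique mentioned above concrete, the following is a minimal illustrative sketch of the two perturbations commonly used in UMT-style denoising (word dropout and bounded local shuffling). This is not the authors' implementation; the function name and parameters are hypothetical.

```python
import random

def noise(sentence, drop_prob=0.1, max_shuffle_dist=3, seed=None):
    """UMT-style noise: word dropout plus bounded local shuffling.

    Illustrative sketch only, not the paper's actual implementation.
    """
    rng = random.Random(seed)
    words = sentence.split()
    # Word dropout: remove each token with probability drop_prob,
    # but never drop every token.
    kept = [w for w in words if rng.random() >= drop_prob] or words[:1]
    # Local shuffle: sort by (index + small random jitter), so each
    # token moves at most max_shuffle_dist positions from its origin.
    keys = [i + rng.uniform(0, max_shuffle_dist) for i in range(len(kept))]
    return " ".join(w for _, w in sorted(zip(keys, kept)))
```

A denoising model is then trained to reconstruct the clean sentence from such noised input, providing a synthetic training signal from monolingual data.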