The neural machine translation approach has gained popularity in machine translation because of its context-analysing ability and its handling of long-term dependency issues. We participated in the WMT21 shared task on similar language translation for the Tamil-Telugu pair under the team name CNLP-NITS. In this task, we exploited monolingual data via pre-trained word embeddings in a transformer-based neural machine translation model to tackle the limited parallel corpus. Our model achieved a bilingual evaluation understudy (BLEU) score of 4.05, a rank-based intuitive bilingual evaluation score (RIBES) of 24.80, and a translation edit rate (TER) of 97.24 for both the Tamil-to-Telugu and Telugu-to-Tamil translation directions.
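The abstract describes initializing a transformer NMT model with word embeddings pre-trained on monolingual data. A minimal sketch of that idea in PyTorch is shown below; the vocabulary size, dimensions, and random "pre-trained" matrix are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

# Placeholder for embeddings learned on monolingual Tamil/Telugu text
# (e.g., via word2vec or fastText); shape is (vocab_size, d_model).
vocab_size, d_model = 8, 4
pretrained = torch.randn(vocab_size, d_model)

# Initialize the model's embedding layer from the pre-trained vectors.
# freeze=False allows the embeddings to be fine-tuned further on the
# (limited) parallel corpus during NMT training.
embed = nn.Embedding.from_pretrained(pretrained, freeze=False)

# A tiny transformer that consumes the pre-initialized embeddings.
model = nn.Transformer(d_model=d_model, nhead=2,
                       num_encoder_layers=1, num_decoder_layers=1,
                       batch_first=True)

src = embed(torch.tensor([[1, 2, 3]]))  # (batch, src_len, d_model)
tgt = embed(torch.tensor([[4, 5]]))     # (batch, tgt_len, d_model)
out = model(src, tgt)                   # (batch, tgt_len, d_model)
```

In a full system the source and target sides would typically use separate vocabularies and embedding tables; a single shared table is used here only to keep the sketch short.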