Low-resource Multilingual Neural Machine Translation (MNMT) is typically tasked with improving translation performance on one or more language pairs with the aid of high-resource language pairs. In this paper, we propose two simple search-based curricula -- orderings of the multilingual training data -- which help improve translation performance in conjunction with existing techniques such as fine-tuning. Additionally, we attempt to learn a curriculum for MNMT from scratch, jointly with the training of the translation system, using contextual multi-armed bandits. We show on the FLORES low-resource translation dataset that these learned curricula can provide better starting points for fine-tuning and improve the overall performance of the translation system.
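The bandit-learned curriculum can be sketched with a plain (non-contextual) EXP3 bandit, a minimal stand-in for the contextual bandits the abstract describes. Each arm corresponds to a language pair from which the next training batch is drawn; the reward definition used here (a normalised dev-loss improvement in [0, 1]) and all numeric settings are illustrative assumptions, not the paper's method.

```python
import math
import random


class Exp3Bandit:
    """Minimal EXP3 sketch: arms = language pairs to sample the next batch from.

    Assumes rewards are normalised to [0, 1] (e.g. scaled dev-loss
    improvement after training on a batch from the chosen pair).
    """

    def __init__(self, n_arms, gamma=0.1, rng=None):
        self.n_arms = n_arms
        self.gamma = gamma          # exploration rate
        self.weights = [1.0] * n_arms
        self.rng = rng or random.Random()

    def probabilities(self):
        total = sum(self.weights)
        # Mix the weight-proportional distribution with uniform exploration.
        return [(1 - self.gamma) * w / total + self.gamma / self.n_arms
                for w in self.weights]

    def select(self):
        probs = self.probabilities()
        return self.rng.choices(range(self.n_arms), weights=probs, k=1)[0]

    def update(self, arm, reward):
        # Importance-weighted exponential update for the chosen arm only.
        prob = self.probabilities()[arm]
        self.weights[arm] *= math.exp(self.gamma * reward / (prob * self.n_arms))


# Toy usage: three hypothetical language pairs with different mean rewards;
# the sampling distribution shifts toward the most useful pair.
rng = random.Random(0)
bandit = Exp3Bandit(n_arms=3, gamma=0.1, rng=rng)
true_reward = [0.2, 0.5, 0.8]   # hypothetical mean dev-loss improvements
for _ in range(2000):
    arm = bandit.select()
    reward = min(1.0, max(0.0, rng.gauss(true_reward[arm], 0.1)))
    bandit.update(arm, reward)
probs = bandit.probabilities()
```

In an actual MNMT run, the reward would come from the translation system itself (e.g. validation-loss change after each update), so the curriculum and the model are learned jointly, as the abstract states.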