Existing curriculum learning approaches to Neural Machine Translation (NMT) require sampling sufficient amounts of "easy" samples from training data at the early training stage. This is not always achievable for low-resource languages where the amount of training data is limited. To address such a limitation, we propose a novel token-wise curriculum learning approach that creates sufficient amounts of easy samples. Specifically, the model learns to predict a short sub-sequence from the beginning part of each target sentence at the early stage of training. Then the sub-sequence is gradually expanded as the training progresses. Such a new curriculum design is inspired by the cumulative effect of translation errors, which makes the latter tokens more challenging to predict than the beginning ones. Extensive experiments show that our approach can consistently outperform baselines on five language pairs, especially for low-resource languages. Combining our approach with sentence-level methods further improves the performance of high-resource languages.
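The core idea of the token-wise curriculum can be illustrated with a short sketch: at each training step, only a prefix of every target sentence contributes to the loss, and the visible prefix grows as training progresses. The snippet below is a minimal illustration only; the linear expansion schedule, the function name curriculum_target_mask, and the min_frac parameter are assumptions for demonstration, since the abstract does not specify the exact schedule or implementation.

import torch

def curriculum_target_mask(target_lengths, progress, min_frac=0.1):
    # Token-wise curriculum: only the first k tokens of each target sentence
    # contribute to the loss, where k grows with training progress.
    #   target_lengths: (batch,) tensor of true target lengths
    #   progress: float in [0, 1], fraction of training completed
    #   min_frac: fraction of each target visible at the very start (assumed)
    visible_frac = min(1.0, min_frac + (1.0 - min_frac) * progress)  # assumed linear schedule
    visible_len = torch.clamp(
        (target_lengths.float() * visible_frac).ceil().long(), min=1
    )
    max_len = int(target_lengths.max())
    positions = torch.arange(max_len).unsqueeze(0)   # (1, max_len)
    # True where a target position falls inside the visible prefix.
    return positions < visible_len.unsqueeze(1)      # (batch, max_len) bool mask

In use, the per-token cross-entropy would be multiplied by this mask before reduction, so that late-sentence tokens, which suffer most from the cumulative effect of translation errors, are ignored early in training and phased in gradually.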