Successful methods for unsupervised neural machine translation (UNMT) employ cross-lingual pretraining via self-supervision, often in the form of a masked language modeling or a sequence generation task, which requires the model to align the lexical- and high-level representations of the two languages. While cross-lingual pretraining works for similar languages with abundant corpora, it performs poorly in low-resource and distant languages. Previous research has shown that this is because the representations are not sufficiently aligned. In this paper, we enhance the bilingual masked language model pretraining with lexical-level information by using type-level cross-lingual subword embeddings. Empirical results demonstrate improved performance both on UNMT (up to 4.5 BLEU) and bilingual lexicon induction using our method compared to a UNMT baseline.
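Below is a minimal sketch, not the authors' code, of the general idea the abstract describes: before bilingual masked language model pretraining, the shared subword embedding table is initialized from type-level cross-lingual subword embeddings (one vector per subword type, pre-aligned across the two languages, e.g., with an offline mapping method). The names `subword_to_id`, `aligned_vectors`, and the toy vocabulary are illustrative assumptions, not artifacts of the paper.

```python
# Hedged sketch: initialize an MLM encoder's subword embedding table from
# pre-aligned, type-level cross-lingual subword vectors before pretraining.

import torch
import torch.nn as nn


def init_from_crosslingual_subword_vectors(embedding, subword_to_id, aligned_vectors):
    """Copy pre-aligned type-level vectors into the embedding matrix.

    Subwords with no aligned vector keep their random initialization.
    Returns the number of rows that were overwritten.
    """
    hits = 0
    with torch.no_grad():
        for subword, idx in subword_to_id.items():
            vec = aligned_vectors.get(subword)
            if vec is not None:
                embedding.weight[idx] = torch.as_tensor(
                    vec, dtype=embedding.weight.dtype
                )
                hits += 1
    return hits


if __name__ == "__main__":
    # Toy shared vocabulary over both languages and toy aligned vectors
    # (hypothetical values, for illustration only).
    subword_to_id = {"_the": 0, "_le": 1, "_cat": 2, "_chat": 3}
    aligned_vectors = {
        "_the": [0.1, 0.2, 0.3, 0.4],
        "_le": [0.1, 0.2, 0.3, 0.4],    # aligned with "_the"
        "_cat": [0.9, 0.1, 0.0, 0.2],
        "_chat": [0.9, 0.1, 0.0, 0.2],  # aligned with "_cat"
    }
    emb = nn.Embedding(len(subword_to_id), 4)
    n = init_from_crosslingual_subword_vectors(emb, subword_to_id, aligned_vectors)
    print(f"initialized {n} of {len(subword_to_id)} subword embeddings")
    # The resulting table would then serve as the input embedding layer of
    # the bilingual masked language model before UNMT training.
```

In this sketch the lexical-level alignment is injected only through the embedding initialization; whether the table is frozen or fine-tuned during pretraining is a design choice left open here.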