Machine translation systems are vulnerable to domain mismatch, especially in low-resource scenarios. Out-of-domain translations are often of poor quality and prone to hallucination, due to exposure bias and the decoder acting as a language model. We adopt two approaches to alleviate this problem: lexical shortlisting restricted by IBM statistical alignments, and hypothesis reranking based on similarity. The methods are computationally cheap and show success on low-resource out-of-domain test sets. However, they lose their advantage when sufficient data is available or when the domain mismatch is too great. This is due both to the IBM model losing its edge over the implicitly learned neural alignment, and to issues with subword segmentation of unseen words.
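The two approaches can be illustrated with a minimal sketch. All function names, the interpolation scheme, and the toy lexical table below are illustrative assumptions, not the paper's actual implementation: the shortlist restricts the decoder's output vocabulary to likely translations of the source tokens per an IBM-style lexical table p(target|source), and reranking interpolates the model score with a crude source-coverage similarity.

```python
def build_shortlist(source_tokens, trans_probs, top_k=5, always_allowed=None):
    """Restrict the target vocabulary to the top-k likely translations
    of each source token, per an IBM-style lexical table p(t|s)."""
    shortlist = set(always_allowed or {"<eos>", "<unk>"})
    for s in source_tokens:
        candidates = sorted(trans_probs.get(s, {}).items(),
                            key=lambda kv: kv[1], reverse=True)
        shortlist.update(t for t, _ in candidates[:top_k])
    return shortlist

def rerank(source_tokens, hypotheses, trans_probs, alpha=0.5):
    """Re-score (model_score, target_tokens) hypotheses by interpolating
    the model score with lexical coverage of the source sentence."""
    def coverage(target_tokens):
        covered = sum(1 for s in source_tokens
                      if any(t in trans_probs.get(s, {}) for t in target_tokens))
        return covered / max(len(source_tokens), 1)
    return sorted(hypotheses,
                  key=lambda h: alpha * h[0] + (1 - alpha) * coverage(h[1]),
                  reverse=True)

# Toy lexical table, shaped like a normalised fast_align/eflomal output.
probs = {"haus": {"house": 0.8, "home": 0.15},
         "blau": {"blue": 0.9}}
src = ["haus", "blau"]
print(build_shortlist(src, probs, top_k=1))   # {'<eos>', '<unk>', 'house', 'blue'}

# The hallucinated first-best is demoted because it covers no source token.
hyps = [(-1.0, ["the", "car", "is", "red"]),
        (-1.2, ["the", "house", "is", "blue"])]
print(rerank(src, hyps, probs)[0][1])         # ['the', 'house', 'is', 'blue']
```

In practice the shortlist would be applied inside the decoder's output softmax rather than as a post-hoc filter, and the similarity signal could come from sentence embeddings instead of lexical coverage; this sketch only conveys the mechanics.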