Word alignment identifies translational correspondences between words in a parallel sentence pair and is used, for example, to train statistical machine translation systems, to learn bilingual dictionaries, or to perform quality estimation. Subword tokenization has become a standard preprocessing step for a large number of applications, notably for state-of-the-art open-vocabulary machine translation systems. In this paper, we thoroughly study how this preprocessing step interacts with the word alignment task and propose several tokenization strategies to obtain well-segmented parallel corpora. Using these new techniques, we were able to improve baseline word-based alignment models for six language pairs.
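To make the interaction between subword tokenization and word alignment concrete, here is a minimal sketch of one common post-processing step: projecting alignment links computed over subword units back to word-level links. It assumes BPE-style "@@" continuation markers (as produced by subword-nmt); the helper names and the toy sentence pair are illustrative, not taken from the paper.

```python
def subword_to_word_map(tokens):
    """Return, for each subword, the index of the word it belongs to.

    A token ending in '@@' is a non-final piece of a word, so the
    word index only advances after a token without the marker.
    """
    mapping, word_idx = [], 0
    for tok in tokens:
        mapping.append(word_idx)
        if not tok.endswith("@@"):
            word_idx += 1
    return mapping


def project_alignment(links, src_tokens, tgt_tokens):
    """Convert subword-level links (i, j) into word-level links."""
    src_map = subword_to_word_map(src_tokens)
    tgt_map = subword_to_word_map(tgt_tokens)
    # Deduplicate: several subword links may collapse onto one word link.
    return sorted({(src_map[i], tgt_map[j]) for i, j in links})


# Toy example: "unbelievable !" aligned to "incroyable !" after BPE.
src = ["un@@", "believ@@", "able", "!"]
tgt = ["in@@", "croyable", "!"]
subword_links = [(0, 0), (1, 1), (2, 1), (3, 2)]
print(project_alignment(subword_links, src, tgt))  # → [(0, 0), (1, 1)]
```

This projection is lossless for the toy pair above, but in general the choice of segmentation changes which subword links the aligner can find in the first place, which is exactly the interaction the paper studies.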