Machine translation typically relies on parallel corpora to provide parallel signals for training. The advent of unsupervised machine translation has freed machine translation from this reliance, though its performance still lags behind that of traditional supervised machine translation. In unsupervised machine translation, the model seeks out similarities between the source and target languages as a source of weak parallel signal to achieve translation. Chomsky's Universal Grammar theory postulates that grammar is an innate form of human knowledge governed by universal principles and constraints. In this paper, we therefore seek to leverage such shared grammar clues to provide more explicit cross-language parallel signals that strengthen the training of unsupervised machine translation models. Through experiments on multiple typical language pairs, we demonstrate the effectiveness of our proposed approaches.
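The abstract does not spell out how the shared grammar clues are operationalized, so the following is only a minimal Python sketch of one plausible instantiation, not the paper's actual method: it scores back-translated pseudo-parallel pairs by the agreement of their part-of-speech sequences (a language-neutral proxy for shared grammatical structure), so that grammatically divergent pairs can be down-weighted during training. The function names and the pre-tagged inputs are hypothetical placeholders.

```python
from difflib import SequenceMatcher

def pos_similarity(src_tags, hyp_tags):
    # Ratio of matching subsequences between two universal-POS tag
    # sequences; 1.0 means the sentences share an identical surface
    # grammatical structure.
    return SequenceMatcher(None, src_tags, hyp_tags).ratio()

def weight_pseudo_pairs(tagged_pairs):
    # Attach a grammar-agreement weight to each back-translated
    # (src_tags, hyp_tags, src, hyp) pair, so that noisy pseudo-parallel
    # sentences contribute less to the UNMT training loss.
    return [(src, hyp, pos_similarity(st, ht))
            for st, ht, src, hyp in tagged_pairs]

# Toy example with hand-assigned universal POS tags (in practice the
# tags would come from monolingual POS taggers for each language).
en_tags = ["PRON", "VERB", "DET", "NOUN"]  # "I read the book"
de_tags = ["PRON", "VERB", "DET", "NOUN"]  # "Ich lese das Buch"
print(pos_similarity(en_tags, de_tags))    # 1.0: full grammar agreement
```

The down-weighting itself could be as simple as multiplying each pseudo-pair's cross-entropy loss by this score; this is one assumption about how a weak grammar signal might enter training, not the paper's confirmed design.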