Diverse machine translation aims to generate multiple target-language translations for a given source-language sentence. To leverage the linear relationship in the sentence latent space introduced by mixup training, we propose a novel method, MixDiversity, which generates different translations for an input sentence by linearly interpolating it with different sentence pairs sampled from the training corpus during decoding. To further improve the faithfulness and diversity of the translations, we propose two simple but effective approaches: selecting diverse sentence pairs from the training corpus and adjusting the interpolation weight for each pair accordingly. Moreover, by controlling the interpolation weight, our method can trade off faithfulness against diversity without any additional training, which most previous methods require. Experiments on WMT'16 en-ro, WMT'14 en-de, and WMT'17 zh-en show that our method substantially outperforms all previous diverse machine translation methods.
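The decoding step described above reduces to a convex combination in the encoder's latent space, with the interpolation weight steering the faithfulness/diversity trade-off. The sketch below illustrates that idea under stated assumptions; it is not the paper's actual implementation, and the `encoder`, `decoder.generate`, and tensor shapes are hypothetical stand-ins (sampled pairs are assumed to be encoded to the same sequence length as the source).

```python
import torch

def mixup_interpolate(h_src: torch.Tensor,
                      h_pair: torch.Tensor,
                      lam: float) -> torch.Tensor:
    """Linearly interpolate two encoder hidden-state sequences.

    h_src, h_pair: (seq_len, hidden_dim) encoder outputs, assumed
    padded/truncated to the same length. lam near 1.0 stays close to
    the source (faithfulness); smaller lam moves toward the sampled
    pair (diversity).
    """
    return lam * h_src + (1.0 - lam) * h_pair

def diverse_decode(encoder, decoder, src_tokens, sampled_pairs, lam=0.8):
    """Hypothetical decoding loop: one translation per sampled pair.

    encoder/decoder are assumed interfaces, not the paper's API:
    encoder(tokens) -> (seq_len, hidden_dim) hidden states, and
    decoder.generate(hidden) -> a token sequence.
    """
    h_src = encoder(src_tokens)
    translations = []
    for pair_tokens in sampled_pairs:   # pairs drawn from the training corpus
        h_pair = encoder(pair_tokens)
        h_mix = mixup_interpolate(h_src, h_pair, lam)
        translations.append(decoder.generate(h_mix))
    return translations
```

Because lam is a plain scalar applied at inference time, sweeping it (or setting it per sampled pair) changes the faithfulness/diversity balance with no retraining, which is the property the abstract highlights.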