This work explores the capacities of character-based Neural Machine Translation to translate noisy User-Generated Content (UGC), with a strong focus on exploring the limits of such approaches in handling productive UGC phenomena, which, almost by definition, cannot be seen at training time. Within a strict zero-shot scenario, we first study the detrimental impact of various user-generated content phenomena on translation performance using a small annotated dataset we developed, and then show that such models are indeed incapable of handling unknown letters, which leads to catastrophic translation failure once such characters are encountered. We further confirm this behavior with a simple, yet insightful, copy-task experiment and highlight the importance of reducing the vocabulary-size hyper-parameter to increase the robustness of character-based models for machine translation.
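The mechanism behind that last point can be illustrated with a minimal sketch (not the paper's implementation; build_char_vocab, encode, and the toy corpus below are hypothetical): a character vocabulary capped at max_size sends the rarest training characters, and every character unseen at training time, to a single <unk> id.

from collections import Counter

def build_char_vocab(corpus, max_size):
    # Keep only the (max_size - 1) most frequent training characters;
    # everything else, including anything unseen at test time, maps to <unk>.
    counts = Counter(ch for line in corpus for ch in line)
    vocab = {"<unk>": 0}
    for ch, _ in counts.most_common(max_size - 1):
        vocab[ch] = len(vocab)
    return vocab

def encode(text, vocab):
    return [vocab.get(ch, vocab["<unk>"]) for ch in text]

train = ["ceci est un tweet banal", "une phrase tout a fait normale"]
vocab = build_char_vocab(train, max_size=20)

# A noisy UGC input with a digit and an emoji never seen in training:
print(encode("trop b1en ce film 😍", vocab))  # '1' and the emoji -> id 0

With a smaller max_size, even some training characters get replaced by <unk>, so the model actually observes <unk> tokens during training rather than meeting them for the first time at test time, which is the intuition behind reducing this hyper-parameter for robustness.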