Attention-based pre-trained language models such as GPT-2 brought considerable progress to end-to-end dialogue modelling. However, they also present considerable risks for task-oriented dialogue, such as a lack of knowledge grounding or diversity. To address these issues, we introduce modified training objectives for language model finetuning, and we employ massive data augmentation via back-translation to increase the diversity of the training data. We further examine the possibilities of combining data from multiple sources to improve performance on the target dataset. We carefully evaluate our contributions with both human and automatic methods. Our model substantially outperforms the baseline on the MultiWOZ data and shows performance competitive with the state of the art in both automatic and human evaluation.
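The back-translation augmentation mentioned above can be sketched as a round trip through a pivot language: each training utterance is translated into the pivot and back, and the result is kept as an extra paraphrase when it differs from the original. The sketch below is a minimal illustration, not the paper's implementation; the translator functions are hypothetical placeholders standing in for real MT models (e.g. English→German and German→English systems).

```python
def back_translate(utterances, to_pivot, from_pivot):
    """Augment a list of utterances via round-trip (back-)translation.

    to_pivot / from_pivot are callables mapping a string to a string,
    standing in for forward and backward machine-translation models.
    Only paraphrases that differ from the source are kept, so the
    augmented set adds surface variety without exact duplicates.
    """
    augmented = []
    for utt in utterances:
        paraphrase = from_pivot(to_pivot(utt))
        if paraphrase != utt:  # keep only genuine surface variants
            augmented.append(paraphrase)
    return augmented


# Toy stand-ins for real MT systems, purely for illustration:
to_pivot = lambda s: s.upper()  # placeholder "forward translation"
from_pivot = lambda s: s.lower().replace("i would like", "i want")

print(back_translate(["I would like a cheap hotel"], to_pivot, from_pivot))
# → ['i want a cheap hotel']
```

In practice the pivot translations would come from trained MT models, and several pivot languages can be used per utterance to multiply the amount of paraphrased training data.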