In open-domain dialogue response generation, a dialogue context can be continued with diverse responses, and dialogue models should capture such one-to-many relations. In this work, we first analyze the training objective of dialogue models from the view of Kullback-Leibler divergence (KLD) and show that the gap between the real-world probability distribution and the single-referenced data's probability distribution prevents the model from learning one-to-many relations efficiently. We then explore approaches to multi-referenced training in two aspects. Data-wise, we generate diverse pseudo references from a powerful pretrained model to build multi-referenced data that provides a better approximation of the real-world distribution. Model-wise, we propose to equip variational models with an expressive prior, named the linear Gaussian model (LGM). Experimental results of automated evaluation and human evaluation show that the methods yield significant improvements over baselines.
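The KLD gap described above can be illustrated with a toy sketch. This is not the paper's method, just a minimal example under assumed numbers: a single-referenced dataset gives a (nearly) one-hot target per context, while a multi-referenced dataset with several (pseudo) references yields a target closer to the hypothetical real-world distribution over valid responses.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions given as dicts,
    assuming the support of p is contained in the support of q."""
    return sum(p[x] * math.log(p[x] / q[x]) for x in p if p[x] > 0)

# Hypothetical "real-world" distribution over four candidate responses
# to one context (numbers are assumed, for illustration only).
real = {"yes": 0.4, "sure": 0.3, "maybe": 0.2, "no": 0.1}

# A single-referenced dataset observes only one response per context,
# so its empirical target is nearly one-hot (smoothed to keep KL finite).
eps = 1e-3
single_ref = {"yes": 1 - 3 * eps, "sure": eps, "maybe": eps, "no": eps}

# A multi-referenced dataset with several (pseudo) references per
# context approximates the real distribution much more closely.
multi_ref = {"yes": 0.45, "sure": 0.25, "maybe": 0.2, "no": 0.1}

gap_single = kl_divergence(real, single_ref)
gap_multi = kl_divergence(real, multi_ref)
print(f"KL(real || single-ref) = {gap_single:.3f}")
print(f"KL(real || multi-ref)  = {gap_multi:.3f}")
```

The single-reference gap is large because the one-hot target assigns vanishing probability to valid alternative responses; the multi-reference target shrinks this gap, which is the motivation for building multi-referenced data in the first place.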