Knowledge-grounded dialogue generation has achieved promising performance with the engagement of external knowledge sources. Typical approaches to this task usually perform two relatively independent sub-tasks, i.e., knowledge selection and knowledge-aware response generation. In this paper, in order to improve the diversity of both knowledge selection and knowledge-aware response generation, we propose a collaborative latent variable (CoLV) model that integrates these two aspects simultaneously in separate yet collaborative latent spaces, so as to capture the inherent correlation between knowledge selection and response generation. During generation, our proposed model first draws a knowledge candidate from a latent space conditioned on the dialogue context, and then samples a response from another, collaborative latent space conditioned on both the context and the selected knowledge. Experimental results on two widely-used knowledge-grounded dialogue datasets show that our model outperforms previous methods on both knowledge selection and response generation.
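The two-stage sampling procedure described above can be sketched in toy form. This is a minimal illustrative sketch only: the function names, the diagonal-Gaussian latents, and the length-based scoring heuristic are all assumptions for exposition, not the paper's actual parameterization (which uses learned neural posteriors and decoders).

```python
import random

random.seed(0)

def sample_latent(mean, std, dim):
    """Draw a latent vector from a diagonal Gaussian (toy stand-in for
    the learned latent distribution)."""
    return [random.gauss(mean, std) for _ in range(dim)]

def select_knowledge(z_k, knowledge_pool):
    """Toy selection rule: pick the candidate whose length is closest to
    a scalar score derived from the knowledge latent."""
    score = sum(z_k)
    return min(knowledge_pool, key=lambda k: abs(len(k) - score))

def generate_response(context, knowledge_pool, dim=4):
    # Stage 1: draw a knowledge latent conditioned on the dialogue context,
    # then select a knowledge candidate from it.
    z_k = sample_latent(mean=len(context) % 3, std=1.0, dim=dim)
    knowledge = select_knowledge(z_k, knowledge_pool)
    # Stage 2: draw a response latent from a second, collaborative space
    # conditioned on both the context and the selected knowledge.
    z_r = sample_latent(mean=len(knowledge) % 3, std=1.0, dim=dim)
    return knowledge, z_r

knowledge, z_r = generate_response("hello", ["fact A", "longer fact B"])
print(knowledge, len(z_r))
```

The point of the sketch is the conditioning structure: the second latent depends on the outcome of the first, which is what lets the two spaces remain separate yet collaborative.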