Incorporating knowledge bases (KBs) into end-to-end task-oriented dialogue systems is challenging, since it requires properly representing each KB entity, which is associated with both its KB context and the dialogue context. Existing works represent an entity while perceiving only part of its KB context, which can lead to less effective representations due to information loss and adversely affects KB reasoning and response generation. To tackle this issue, we explore fully contextualizing the entity representation by dynamically perceiving all the relevant entities and the dialogue history. To achieve this, we propose a COntext-aware Memory Enhanced Transformer framework (COMET), which treats the KB as a sequence and leverages a novel Memory Mask to enforce that each entity attends only to its relevant entities and the dialogue history, avoiding distraction from irrelevant entities. Through extensive experiments, we show that our COMET framework achieves superior performance over the state of the art.
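The Memory Mask idea described above can be sketched as an additive attention mask over a sequence that concatenates dialogue-history tokens with KB entity tokens: blocked positions get negative infinity before the softmax, so an entity's attention weight on an irrelevant entity becomes zero. The following is a minimal illustrative sketch, not the paper's exact design — the `relevance` structure (e.g. entities sharing a KB row) and the choice to let dialogue tokens attend everywhere are assumptions for illustration.

```python
import numpy as np

def build_memory_mask(n_dialogue, n_entities, relevance):
    """Build an additive attention mask (0.0 = may attend, -inf = blocked).

    The sequence is [dialogue tokens | entity tokens]. relevance[i][j] is True
    when entity i should perceive entity j (an assumed relevance relation,
    e.g. sharing a KB row). Dialogue tokens are left unrestricted here; each
    entity attends to the dialogue history, itself, and its relevant entities.
    """
    n = n_dialogue + n_entities
    mask = np.full((n, n), -np.inf)
    mask[:n_dialogue, :] = 0.0            # dialogue tokens: unrestricted (assumption)
    mask[n_dialogue:, :n_dialogue] = 0.0  # entities always see the dialogue history
    for i in range(n_entities):
        for j in range(n_entities):
            if i == j or relevance[i][j]:
                mask[n_dialogue + i, n_dialogue + j] = 0.0
    return mask

def masked_attention(q, k, v, mask):
    """Scaled dot-product attention with an additive mask."""
    scores = q @ k.T / np.sqrt(q.shape[-1]) + mask
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because the mask is additive, it drops into any standard Transformer attention layer unchanged; only the mask construction encodes the KB structure.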