For each goal-oriented dialog task of interest, large amounts of data need to be collected for end-to-end learning of a neural dialog system. Collecting that data is a costly and time-consuming process. Instead, we show that we can use only a small amount of data, supplemented with data from a related dialog task. Naively learning from related data fails to improve performance as the related data can be inconsistent with the target task. We describe a meta-learning based method that selectively learns from the related dialog task data. Our approach leads to significant accuracy improvements in an example dialog task.
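To make "selectively learns from the related dialog task data" concrete, below is a minimal sketch of one common meta-learning-style reweighting scheme, not the paper's exact algorithm: each related-task example is weighted by how well its gradient agrees with the gradient computed on a small target-task batch, so inconsistent related data is automatically down-weighted. The model, the toy tensors, and the cosine-similarity rule are illustrative assumptions.

```python
# A hedged sketch of selective transfer via gradient-agreement reweighting.
# Related-task examples whose gradients conflict with the target-task
# gradient receive weight ~0 and are effectively ignored.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(16, 4)                 # stand-in for a dialog response classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def flat_grad(loss):
    """Flatten d(loss)/d(params) into a single vector."""
    grads = torch.autograd.grad(loss, model.parameters())
    return torch.cat([g.reshape(-1) for g in grads])

def train_step(target_x, target_y, related_x, related_y):
    # Gradient direction implied by the small amount of target-task data.
    target_grad = flat_grad(F.cross_entropy(model(target_x), target_y))

    # Per-example weights: agreement with the target gradient keeps the
    # example; conflict clamps its weight to zero (selective learning).
    weights = []
    for i in range(related_x.size(0)):
        g = flat_grad(F.cross_entropy(model(related_x[i:i+1]), related_y[i:i+1]))
        weights.append(torch.clamp(F.cosine_similarity(g, target_grad, dim=0), min=0.0))
    weights = torch.stack(weights).detach()

    # One update on the target batch plus the reweighted related batch.
    opt.zero_grad()
    per_ex = F.cross_entropy(model(related_x), related_y, reduction="none")
    loss = F.cross_entropy(model(target_x), target_y) + (weights * per_ex).mean()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: random tensors stand in for encoded dialog turns.
tx, ty = torch.randn(8, 16), torch.randint(0, 4, (8,))
rx, ry = torch.randn(32, 16), torch.randint(0, 4, (32,))
print(train_step(tx, ty, rx, ry))
```

The per-example loop is quadratic in batch size and is kept only for clarity; practical implementations batch the gradient computation or reweight at the mini-batch level.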