Deep reinforcement learning has shown great potential for training dialogue policies. However, its favorable performance comes at the cost of many rounds of interaction. Most existing dialogue policy methods rely on a single learning system, whereas the human brain has two specialized learning and memory systems that support finding good solutions without requiring copious examples. Inspired by the human brain, this paper proposes a novel complementary policy learning (CPL) framework, which exploits the complementary advantages of an episodic memory (EM) policy and a deep Q-network (DQN) policy to achieve fast and effective dialogue policy learning. To coordinate the two policies, we propose a confidence controller that decides when each policy complements the other according to their relative efficacy at different stages. Furthermore, memory connectivity and time pruning are proposed to guarantee the flexible and adaptive generalization of the EM policy in dialogue tasks. Experimental results on three dialogue datasets show that our method significantly outperforms existing methods that rely on a single learning system.
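To make the coordination idea concrete, the following is a minimal sketch of the two-policy setup: an episodic-memory policy that replays the best remembered action for a seen state, a tabular stand-in for the DQN policy, and a confidence controller that routes between them by their recent success rates. All class names, the tabular Q stand-in, and the success-rate heuristic are illustrative assumptions, not the paper's implementation.

```python
from collections import defaultdict

class EpisodicMemoryPolicy:
    """Remembers the highest-return action per state; returns None for unseen states."""
    def __init__(self):
        self.memory = {}  # state -> (action, best return)

    def store(self, state, action, ret):
        best = self.memory.get(state)
        if best is None or ret > best[1]:
            self.memory[state] = (action, ret)

    def act(self, state):
        entry = self.memory.get(state)
        return entry[0] if entry else None

class QPolicy:
    """Tabular stand-in for a DQN: greedy argmax over learned Q-values."""
    def __init__(self, n_actions):
        self.q = defaultdict(lambda: [0.0] * n_actions)

    def act(self, state):
        values = self.q[state]
        return max(range(len(values)), key=values.__getitem__)

class ConfidenceController:
    """Tracks each policy's recent success rate and picks the stronger one."""
    def __init__(self):
        # [successes, trials], seeded to avoid division by zero (assumption)
        self.stats = {"em": [1, 2], "dqn": [1, 2]}

    def update(self, name, success):
        self.stats[name][0] += int(success)
        self.stats[name][1] += 1

    def choose(self):
        rate = lambda n: self.stats[n][0] / self.stats[n][1]
        # Prefer the EM policy early, when its recall is at least as reliable
        return "em" if rate("em") >= rate("dqn") else "dqn"

em, dqn, ctrl = EpisodicMemoryPolicy(), QPolicy(n_actions=3), ConfidenceController()
em.store("greet", action=1, ret=0.9)
ctrl.update("em", True)
ctrl.update("dqn", False)
source = ctrl.choose()
action = em.act("greet") if source == "em" else dqn.act("greet")
```

In this sketch the controller favors the EM policy while its empirical success rate is higher (typically early in training, when a few good episodes are worth more than an undertrained Q-function), and hands control to the DQN policy as its estimates improve.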