
Few Shot Dialogue State Tracking using Meta-learning

Published by: Saket Dingliwal
Publication date: 2021
Research field: Informatics Engineering
Language: English





Dialogue State Tracking (DST) forms a core component of automated chatbot-based systems designed for specific goals like hotel or taxi reservation, tourist information, etc. With the increasing need to deploy such systems in new domains, solving the problem of zero/few-shot DST has become necessary. There has been a rising trend toward learning to transfer knowledge from resource-rich domains to unknown domains with minimal need for additional data. In this work, we explore the merits of meta-learning algorithms for this transfer and hence propose D-REPTILE, a meta-learner specific to the DST problem. With extensive experimentation, we provide clear evidence of benefits over conventional approaches across different domains, methods, base models, and datasets, with significant (5-25%) improvement over the baseline in a low-data setting. Our proposed meta-learner is agnostic to the underlying model, and hence any existing state-of-the-art DST system can improve its performance on unknown domains using our training strategy.
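
As the name suggests, D-REPTILE adapts the first-order Reptile meta-learning algorithm to DST, treating each resource-rich source domain as a meta-learning task. The sketch below is a minimal illustration of such a Reptile-style outer loop, not the authors' implementation; `model`, `domain_loaders`, `loss_fn`, and all hyperparameter values are placeholder assumptions.

```python
# Minimal Reptile-style outer loop over source DST domains (illustrative
# sketch). Assumes `model` is any torch.nn.Module DST model and
# `domain_loaders` maps domain name -> DataLoader of training batches.
import copy
import random
import torch

def d_reptile_sketch(model, domain_loaders, loss_fn,
                     meta_steps=1000, inner_steps=5,
                     inner_lr=1e-4, meta_lr=0.1):
    for _ in range(meta_steps):
        # Treat each resource-rich source domain as one meta-learning task.
        domain = random.choice(list(domain_loaders))
        task_model = copy.deepcopy(model)
        opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)

        # Inner loop: a few SGD steps on batches from the sampled domain.
        batches = iter(domain_loaders[domain])
        for _ in range(inner_steps):
            loss = loss_fn(task_model, next(batches))
            opt.zero_grad()
            loss.backward()
            opt.step()

        # First-order Reptile update: nudge the shared initialization
        # toward the domain-adapted weights (no second-order gradients).
        with torch.no_grad():
            for p, q in zip(model.parameters(), task_model.parameters()):
                p.add_(meta_lr * (q - p))
    return model  # initialization to fine-tune on the unseen target domain
```

After meta-training, the returned weights serve as the initialization that is fine-tuned with the few available target-domain examples, which is where the model-agnostic low-data gains would come from.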


Read also

As the creation of task-oriented conversational data is costly, data augmentation techniques have been proposed to create synthetic data to improve model performance in new domains. Up to now, these learning-based techniques (e.g. paraphrasing) have still required a moderate amount of data, making application to low-resource settings infeasible. To tackle this problem, we introduce an augmentation framework that creates synthetic task-oriented dialogues, operating with as few as 5 shots. Our framework utilizes belief state annotations to define the dialogue function of each turn pair. It then creates templates of pairs through de-lexicalization, where the dialogue function codifies the allowable incoming and outgoing links of each template. To generate new dialogues, our framework composes allowable adjacent templates in a bottom-up manner. We evaluate our framework using TRADE as the base DST model, observing significant improvements in fine-tuning scenarios within a low-resource setting. We conclude that this end-to-end dialogue augmentation framework can be a practical tool for improving natural language understanding performance in emerging task-oriented dialogue domains.
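
To make the template mechanics concrete, the toy sketch below de-lexicalizes turn pairs into templates keyed by their dialogue function (here, simply the set of slots in the turn's belief state) and chains templates bottom-up. The data structures, the superset link rule, and every helper name are hypothetical simplifications, not the paper's framework.

```python
# Illustrative turn-pair templatization and bottom-up composition
# (hypothetical data structures and link rule).
import random

def delexicalize(utterance, belief_state):
    """Replace slot values with placeholders, e.g. 'cheap' -> '[price]'."""
    for slot, value in belief_state.items():
        utterance = utterance.replace(value, f"[{slot}]")
    return utterance

def build_templates(dialogues):
    """dialogues: lists of turn pairs {'usr', 'sys', 'belief'}."""
    templates = []
    for dialogue in dialogues:
        for turn in dialogue:
            func = frozenset(turn["belief"])  # the turn's dialogue function
            templates.append({
                "func": func,
                "usr": delexicalize(turn["usr"], turn["belief"]),
                "sys": delexicalize(turn["sys"], turn["belief"]),
            })
    return templates

def compose(templates, ontology, max_turns=4):
    """Chain templates whose functions may be adjacent, then re-lexicalize
    placeholders with fresh values drawn from the ontology."""
    dialogue, prev = [], None
    for _ in range(max_turns):
        allowed = [t for t in templates
                   if prev is None or t["func"] >= prev]  # toy link rule
        if not allowed:
            break
        t = random.choice(allowed)
        values = {s: random.choice(ontology[s]) for s in t["func"]}
        usr, sys = t["usr"], t["sys"]
        for s, v in values.items():
            usr, sys = usr.replace(f"[{s}]", v), sys.replace(f"[{s}]", v)
        dialogue.append((usr, sys))
        prev = t["func"]
    return dialogue
```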
Zero-shot transfer learning for dialogue state tracking (DST) enables us to handle a variety of task-oriented dialogue domains without the expense of collecting in-domain data. In this work, we propose to transfer cross-task knowledge from general question answering (QA) corpora for the zero-shot DST task. Specifically, we propose TransferQA, a transferable generative QA model that seamlessly combines extractive QA and multi-choice QA via a text-to-text transformer framework, and tracks both categorical slots and non-categorical slots in DST. In addition, we introduce two effective ways to construct unanswerable questions, namely, negative question sampling and context truncation, which enable our model to handle none-value slots in the zero-shot DST setting. Extensive experiments show that our approaches substantially improve the existing zero-shot and few-shot results on MultiWOZ. Moreover, compared to the fully trained baseline on the Schema-Guided Dialogue dataset, our approach shows better generalization ability in unseen domains.
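
A rough illustration of the text-to-text casting: categorical slots become multi-choice questions over their ontology values, non-categorical slots become extractive questions, and negative question sampling yields 'none'-valued training pairs (context truncation, the paper's second trick, would simply drop early turns from the context). The prompt formats below are assumptions, not TransferQA's exact templates.

```python
# Illustrative casting of DST slots as text-to-text QA
# (prompt formats are assumptions).
import random

def slot_to_question(slot, dialogue, choices=None):
    """Categorical slots become multi-choice QA; non-categorical slots
    become extractive QA over the dialogue context."""
    if choices:  # categorical slot: enumerate the ontology values
        opts = " ".join(f"({i}) {c}" for i, c in enumerate(choices))
        return f"question: what is the {slot}? options: {opts} context: {dialogue}"
    return f"question: what is the {slot}? context: {dialogue}"

def negative_question(all_slots, present_slots, dialogue):
    """Negative question sampling: ask about a slot the dialogue never
    touches, so the gold answer is 'none' (trains none-value handling)."""
    absent = [s for s in all_slots if s not in present_slots]
    return slot_to_question(random.choice(absent), dialogue), "none"

context = "user: I need a cheap hotel in the north."
print(slot_to_question("hotel-pricerange", context,
                       choices=["cheap", "moderate", "expensive"]))
q, a = negative_question(["hotel-parking", "train-day"],
                         ["hotel-pricerange", "hotel-area"], context)
print(q, "->", a)
```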
Zero-shot transfer learning for multi-domain dialogue state tracking can allow us to handle new domains without incurring the high cost of data acquisition. This paper proposes a new zero-shot transfer learning technique for dialogue state tracking where the in-domain training data are all synthesized from an abstract dialogue model and the ontology of the domain. We show that data augmentation through synthesized data can improve the accuracy of zero-shot learning for both the TRADE model and the BERT-based SUMBT model on the MultiWOZ 2.1 dataset. We show that training the SUMBT model with only synthesized in-domain data can reach about 2/3 of the accuracy obtained with the full training dataset. We improve the zero-shot learning state of the art on average across domains by 21%.
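
The synthesis idea can be illustrated as follows: sample a user goal from the domain ontology, unroll templated turns according to a simple abstract dialogue model, and label each turn with the cumulative belief state. Everything in this sketch (the ontology, templates, and inform/confirm schema) is invented for illustration.

```python
# Toy synthesis of labelled in-domain dialogues from an ontology and a
# hand-written abstract dialogue model (all templates invented here).
import random

ontology = {  # hypothetical target-domain ontology
    "restaurant-area": ["north", "south", "centre"],
    "restaurant-pricerange": ["cheap", "moderate", "expensive"],
}

def synthesize_dialogue(ontology):
    """Sample a user goal, unroll a simple inform/confirm schema, and
    label every turn with the cumulative belief state (the DST target)."""
    goal = {slot: random.choice(vals) for slot, vals in ontology.items()}
    belief, turns = {}, []
    for slot, value in goal.items():
        belief[slot] = value
        attr = slot.split("-")[1]  # e.g. 'area', 'pricerange'
        turns.append({
            "usr": f"I would like the {attr} to be {value}.",
            "sys": f"Sure, noting {attr} = {value}.",
            "belief": dict(belief),
        })
    return turns

for turn in synthesize_dialogue(ontology):
    print(turn)
```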
We present our work on Track 4 of the Dialogue System Technology Challenges 8 (DSTC8). DSTC8 Track 4 aims to perform dialogue state tracking (DST) under zero-shot settings, in which the model needs to generalize to unseen service APIs given only a schema definition of these target APIs. Serving as the core of many virtual assistants such as Siri, Alexa, and Google Assistant, the DST keeps track of the user's goal and what has happened in the dialogue history, mainly including intent prediction, slot filling, and user state tracking, which tests a model's natural language understanding ability. Recently, pretrained language models have achieved state-of-the-art results and shown impressive generalization ability on various NLP tasks, which provides a promising way to perform zero-shot learning for language understanding. Based on this, we propose a schema-guided paradigm for zero-shot dialogue state tracking (SGP-DST) by fine-tuning BERT, one of the most popular pretrained language models. The SGP-DST system contains four modules for intent prediction, slot prediction, slot transfer prediction, and user state summarizing, respectively. According to the official evaluation results, our SGP-DST (team12) ranked 3rd on joint goal accuracy (the primary evaluation metric for ranking submissions) and 1st on requested slots F1 among 25 participating teams.
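
The schema-guided idea behind one of these modules, intent prediction, can be sketched as a BERT cross-encoder that scores the current utterance against each intent's natural-language schema description. This is a hypothetical simplification of one module's shape, not the four-module SGP-DST system; the classification head below is untrained, so its scores are meaningless until fine-tuned on schema-guided data.

```python
# Sketch of schema-guided intent prediction with a BERT cross-encoder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)  # scoring head, untrained here

utterance = "I want to book a table for two tonight."
intent_descriptions = {  # natural-language descriptions from the schema
    "ReserveRestaurant": "Reserve a table at a restaurant.",
    "FindEvents": "Find cultural events happening in a city.",
}

# Score the utterance against every intent description in the schema;
# unseen services are handled because only their descriptions are needed.
scores = {}
with torch.no_grad():
    for intent, desc in intent_descriptions.items():
        enc = tok(utterance, desc, return_tensors="pt", truncation=True)
        scores[intent] = model(**enc).logits.item()
print(max(scores, key=scores.get))  # predicted intent (after fine-tuning)
```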
Zero-shot cross-domain dialogue state tracking (DST) enables us to handle task-oriented dialogue in unseen domains without the expense of collecting in-domain data. In this paper, we propose a slot description enhanced generative approach for zero-shot cross-domain DST. Specifically, our model first encodes dialogue context and slots with a pre-trained self-attentive encoder, and generates slot values in an auto-regressive manner. In addition, we incorporate Slot Type Informed Descriptions that capture the shared information across slots to facilitate cross-domain knowledge transfer. Experimental results on the MultiWOZ dataset show that our proposed method significantly improves existing state-of-the-art results in the zero-shot cross-domain setting.
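
A minimal sketch of this generative formulation, assuming a T5-style encoder-decoder and an invented prompt format with a slot-type-informed description; the off-the-shelf t5-small checkpoint is not fine-tuned for DST, so real use would require training on source domains first.

```python
# Sketch of slot-description-enhanced generative DST: a slot-type-informed
# description is prepended to the context and the value is decoded
# auto-regressively (prompt format assumed).
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

context = "user: I am looking for a cheap hotel in the north."
# The type tag ('location') is shared by every area-like slot, which is
# what lets knowledge transfer across domains.
slot_desc = "hotel-area: area of the hotel. type: location"

inputs = tok(f"track slot: {slot_desc} context: {context}",
             return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8)
print(tok.decode(out[0], skip_special_tokens=True))
```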