Task-oriented conversational systems often use dialogue state tracking to represent the user's intentions, which involves filling in values of pre-defined slots. Many approaches have been proposed, often using task-specific architectures with special-purpose classifiers. Recently, good results have been obtained using more general architectures based on pretrained language models. Here, we introduce a new variation of the language modeling approach that uses schema-driven prompting to provide task-aware history encoding that is used for both categorical and non-categorical slots. We further improve performance by augmenting the prompting with schema descriptions, a naturally occurring source of in-domain knowledge. Our purely generative system achieves state-of-the-art performance on MultiWOZ 2.2 and achieves competitive performance on two other benchmarks: MultiWOZ 2.1 and M2M. The data and code will be available at https://github.com/chiahsuan156/DST-as-Prompting.
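As a rough illustration of the schema-driven prompting idea described above, the sketch below builds a prompt from the dialogue history plus a slot's schema description (and candidate values for a categorical slot) and decodes the slot value with a pretrained seq2seq language model. The prompt layout, the `build_prompt` helper, and the use of the `t5-small` checkpoint are illustrative assumptions, not the exact format or model from the paper.

```python
# Minimal sketch of schema-driven prompting for dialogue state tracking.
# Assumes a T5-style seq2seq model that would be fine-tuned to emit slot values;
# the prompt layout below is illustrative only.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def build_prompt(history, domain, slot, slot_description, possible_values=None):
    """Concatenate dialogue history with schema information for one slot."""
    prompt = f"{history} [domain] {domain} [slot] {slot}: {slot_description}"
    if possible_values:  # categorical slots also list their candidate values
        prompt += " [values] " + ", ".join(possible_values)
    return prompt

history = "[user] I need a cheap hotel in the north. [system] How many nights?"
prompt = build_prompt(
    history,
    domain="hotel",
    slot="pricerange",
    slot_description="price budget of the hotel",
    possible_values=["cheap", "moderate", "expensive"],
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
# After task-specific fine-tuning, the decoded string would be the slot value, e.g. "cheap".
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same prompt construction applies to non-categorical slots by simply omitting the candidate-value list, so one generative model covers both slot types.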