We propose MultiDoc2Dial, a new task and dataset on modeling goal-oriented dialogues grounded in multiple documents. Most previous works treat document-grounded dialogue modeling as a machine reading comprehension task based on a single given document or passage. In this work, we aim to address more realistic scenarios where a goal-oriented information-seeking conversation involves multiple topics, and is hence grounded in different documents. To facilitate such a task, we introduce a new dataset that contains dialogues grounded in multiple documents from four different domains. We also explore modeling the dialogue-based and document-based contexts in the dataset. We present strong baseline approaches and various experimental results, aiming to support further research efforts on this task.