Generating informative and appropriate responses is challenging but important for building human-like dialogue systems. Although various knowledge-grounded conversation models have been proposed, these models have limitations in utilizing knowledge that occurs infrequently in the training data, not to mention integrating unseen knowledge into conversation generation. In this paper, we propose an Entity-Agnostic Representation Learning (EARL) method to introduce knowledge graphs into informative conversation generation. Unlike traditional approaches that parameterize a specific representation for each entity, EARL utilizes the context of conversations and the relational structure of knowledge graphs to learn category representations for entities, which generalizes to incorporating unseen knowledge-graph entities into conversation generation. Automatic and manual evaluations demonstrate that our model can generate more informative, coherent, and natural responses than baseline models.