Abstractive dialogue summarization suffers from many factual errors, which stem from salient elements being scattered across the multi-speaker information interaction process. In this work, we design a heterogeneous semantic slot graph with slot-level mask cross-attention to enhance slot features for more factually correct summarization. We also propose a slot-driven beam search algorithm in the decoding process that prioritizes generating salient elements within a limited length by "filling in the blanks". Besides, adversarial contrastive learning is introduced to assist the training process and alleviate exposure bias. Experimental results on different types of factual errors show the effectiveness of our methods, and human evaluation further verifies the results.
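The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of what slot-level mask cross-attention could look like: slot node features from the graph attend over encoder token states, with a boolean slot-level mask restricting each slot to its associated dialogue spans. The function name, tensor shapes, and masking scheme are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn.functional as F

def slot_masked_cross_attention(slot_states, token_states, slot_mask):
    """Hypothetical slot-level masked cross-attention.

    slot_states:  (num_slots, d)   slot node features from the semantic graph
    token_states: (num_tokens, d)  encoder hidden states of the dialogue
    slot_mask:    (num_slots, num_tokens) bool, True where attention is allowed
    Returns enhanced slot features of shape (num_slots, d).
    """
    d = slot_states.size(-1)
    # Scaled dot-product scores between each slot and every dialogue token.
    scores = slot_states @ token_states.T / d ** 0.5
    # Block attention outside the slot's associated spans. Note: a slot whose
    # mask row is entirely False would yield NaN after softmax; in practice
    # each slot is assumed to have at least one associated token.
    scores = scores.masked_fill(~slot_mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return weights @ token_states

# Toy usage: 3 slots attending over 5 dialogue tokens of dimension 8.
slots = torch.randn(3, 8)
tokens = torch.randn(5, 8)
mask = torch.zeros(3, 5, dtype=torch.bool)
mask[0, :2] = mask[1, 2:4] = mask[2, 4:] = True
enhanced = slot_masked_cross_attention(slots, tokens, mask)
```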