Repetition in natural language generation reduces the informativeness of text and makes it less appealing. Various techniques have been proposed to alleviate it. In this work, we explore and propose techniques to reduce repetition in abstractive summarization. First, we explore the application of unlikelihood training and embedding matrix regularizers from previous work on language modeling to abstractive summarization. Next, we extend the coverage and temporal attention mechanisms to the token level to reduce repetition. In our experiments on the CNN/Daily Mail dataset, we observe that these techniques reduce the amount of repetition and increase the informativeness of the summaries, which we confirm via human evaluation.
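To make the approach concrete, the following is a minimal PyTorch sketch (our own illustration, not the paper's released code) of two ingredients named above: token-level unlikelihood training (Welleck et al., 2020), which penalizes probability mass placed on tokens already generated, and the coverage loss of See et al. (2017), which the paper extends to the token level. The function names, the candidate-set construction, and the loss weights alpha and lam are assumptions made for illustration.

import torch
import torch.nn.functional as F

def token_unlikelihood_loss(logits, targets, pad_id=0):
    # logits: (B, T, V) decoder scores; targets: (B, T) gold token ids.
    probs = F.log_softmax(logits, dim=-1).exp()
    B, T, V = logits.shape
    # Negative candidates C_t: tokens that already appeared in targets[:t],
    # excluding the gold token at step t (the reference is never penalized).
    cand = torch.zeros(B, T, V, device=logits.device)
    for t in range(1, T):
        cand[:, t].scatter_(1, targets[:, :t], 1.0)
        cand[:, t].scatter_(1, targets[:, t:t + 1], 0.0)
    # L_UL = -sum_{c in C_t} log(1 - p(c)); clamped for numerical stability.
    ul = -((1.0 - probs).clamp(min=1e-5).log() * cand).sum(-1)    # (B, T)
    mask = (targets != pad_id).float()
    return (ul * mask).sum() / mask.sum().clamp(min=1.0)

def coverage_loss(attn, dec_mask):
    # attn: (B, T_dec, T_src) attention over the source at each decoder step.
    # Coverage at step t is the attention mass accumulated at steps < t;
    # re-attending to already-covered positions is penalized via min(a_t, c_t).
    coverage = torch.cumsum(attn, dim=1) - attn
    loss = torch.minimum(attn, coverage).sum(-1)                   # (B, T_dec)
    return (loss * dec_mask).sum() / dec_mask.sum().clamp(min=1.0)

# Hypothetical combined objective: cross-entropy augmented with both
# penalties, weighted by tunable scalars alpha and lam.
# total = nll_loss + alpha * token_unlikelihood_loss(logits, targets) \
#         + lam * coverage_loss(attn, dec_mask)

In practice the unlikelihood term is typically mixed into the cross-entropy objective with a small weight, and the coverage penalty is switched on only for a final phase of training, as in See et al. (2017).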