We consider the problem of topic-focused abstractive summarization, where the goal is to generate an abstractive summary focused on a particular topic, given as a phrase of one or more words. We hypothesize that the task of generating topic-focused summaries can be improved by showing the model what it must not focus on. We introduce a deep reinforcement learning approach to topic-focused abstractive summarization, trained on rewards with a novel negative example baseline. We define the input to this problem as the source text preceded by the topic. We adapt the CNN-Daily Mail and New York Times summarization datasets for this task. We then show through experiments on existing rewards that using a negative example baseline can outperform using a self-critical baseline on ROUGE, BERTScore, and human evaluation metrics.
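To make the training signal concrete, the sketch below illustrates a REINFORCE-style loss with a reward baseline, contrasting a self-critical baseline with one plausible reading of the negative example baseline. This is a minimal illustration, not the authors' released code: the `SummarizerModel` interface (`sample`, `greedy`, `decode`), the `<sep>` separator token, and the construction of the negative example from a mismatched topic are all assumptions made for exposition.

```python
import torch

# A minimal sketch, assuming a PyTorch seq2seq summarizer with hypothetical
# `sample`/`greedy`/`decode` methods and a reward function such as ROUGE.
# The negative example here is a summary generated for a mismatched topic,
# our reading of "showing the model what it must not focus on".

def rl_loss(model, topic, neg_topic, source, reference, reward_fn,
            self_critical=False):
    # Input format from the abstract: the source text preceded by the topic.
    inp = f"{topic} <sep> {source}"

    # Sample a summary and keep per-token log-probabilities for REINFORCE.
    sample_ids, log_probs = model.sample(inp)
    r_sample = reward_fn(model.decode(sample_ids), reference)

    with torch.no_grad():
        if self_critical:
            # Self-critical baseline: reward of the greedy decode.
            baseline_ids = model.greedy(inp)
        else:
            # Negative example baseline (assumed construction): reward of a
            # summary generated with the wrong topic prepended.
            baseline_ids, _ = model.sample(f"{neg_topic} <sep> {source}")
        baseline = reward_fn(model.decode(baseline_ids), reference)

    # REINFORCE with baseline: increase the probability of sampled summaries
    # whose reward exceeds the baseline, decrease it otherwise.
    advantage = r_sample - baseline
    return -advantage * log_probs.sum()
```

Under this reading, an off-topic summary should earn a low reward against the topic-focused reference, so the advantage stays positive whenever the model attends to the right topic, which is the intuition behind preferring it over the greedy self-critical baseline.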