Despite significant progress in neural abstractive summarization, recent studies have shown that current models are prone to generating summaries that are unfaithful to the original context. To address this issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique for correcting extrinsic hallucinations (i.e., information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries in which named entities and quantities in the generated summary are replaced with ones of compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that the proposed method is effective at identifying and correcting extrinsic hallucinations. We also analyze the typical hallucination phenomena exhibited by different types of neural summarization systems, in the hope of providing insights for future work in this direction.
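To make the candidate generation and selection steps concrete, here is a minimal sketch of the idea, not the authors' implementation. It assumes spaCy's "en_core_web_sm" model for extracting named entities and quantities, and a hypothetical score_faithfulness function standing in for the learned discriminative correction model.

```python
# Sketch of contrast candidate generation and selection.
# Assumes: spaCy ("en_core_web_sm") for NER; `score_faithfulness` is a
# hypothetical stand-in for the paper's learned discriminative model.
import spacy

nlp = spacy.load("en_core_web_sm")

def generate_candidates(summary: str, source: str) -> list[str]:
    """Replace each entity/quantity mention in the summary with source
    entities of the same semantic type, yielding contrast candidates."""
    summary_doc = nlp(summary)
    source_doc = nlp(source)

    # Group source entities by semantic type (e.g. PERSON, DATE, CARDINAL).
    source_ents_by_type: dict[str, set[str]] = {}
    for ent in source_doc.ents:
        source_ents_by_type.setdefault(ent.label_, set()).add(ent.text)

    candidates = [summary]  # keep the original summary as one candidate
    for ent in summary_doc.ents:
        for replacement in source_ents_by_type.get(ent.label_, ()):
            if replacement != ent.text:
                # Swap in a source entity of a compatible type.
                candidates.append(
                    summary[: ent.start_char] + replacement + summary[ent.end_char :]
                )
    return candidates

def select_best(candidates: list[str], source: str, score_faithfulness) -> str:
    """Pick the candidate the discriminative model scores as most faithful."""
    return max(candidates, key=lambda c: score_faithfulness(c, source))
```

Because the swaps only draw on entities actually present in the source document, any extrinsically hallucinated entity in the original summary competes against grounded alternatives, and the selector can correct it without regenerating the summary.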