Modern summarization models generate highly fluent but often factually unreliable outputs. This has motivated a surge of metrics that attempt to measure the factuality of automatically generated summaries. Due to the lack of common benchmarks, these metrics cannot be directly compared. Moreover, these methods treat factuality as a binary concept and fail to provide deeper insights into the kinds of inconsistencies made by different systems. To address these limitations, we devise a typology of factual errors and use it to collect human annotations of summaries generated by state-of-the-art summarization systems for the CNN/DM and XSum datasets. Through these annotations, we identify the proportion of different categories of factual errors and benchmark factuality metrics, showing their correlation with human judgment as well as their specific strengths and weaknesses.
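Benchmarking a factuality metric against human judgment, as described above, amounts to correlating per-summary metric scores with per-summary human annotations. A minimal sketch of that computation is below; the scores are illustrative placeholders (not data from the paper), and Pearson correlation is one common choice of correlation statistic.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative placeholder values: one automatic factuality-metric score
# and one human factuality judgment per generated summary.
metric_scores = [0.9, 0.4, 0.7, 0.2, 0.8]
human_scores  = [1.0, 0.0, 1.0, 0.0, 1.0]

r = pearson(metric_scores, human_scores)
print(f"correlation with human judgment: {r:.3f}")
```

In practice one would also report rank correlations (e.g. Spearman) and compute the statistic per error category to expose the metric-specific strengths and weaknesses the abstract mentions.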