Summarization evaluation remains an open research problem: current metrics such as ROUGE are known to be limited and to correlate poorly with human judgments. To alleviate this issue, recent work has proposed evaluation metrics which rely on question answering models to assess whether a summary contains all the relevant information in its source document. Though promising, the proposed approaches have so far failed to correlate better than ROUGE with human judgments. In this paper, we extend previous approaches and propose a unified framework, named QuestEval. In contrast to established metrics such as ROUGE or BERTScore, QuestEval does not require any ground-truth reference. Nonetheless, QuestEval substantially improves the correlation with human judgments over four evaluation dimensions (consistency, coherence, fluency, and relevance), as shown in extensive experiments.
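To make the QA-based idea concrete, the sketch below approximates such a metric with off-the-shelf components: the same questions are asked against both the source document and the summary, and the score rewards answer agreement. The model name, the hand-written questions, and the exact-match comparison are illustrative assumptions only; the actual QuestEval pipeline also generates questions automatically and uses softer answer comparisons.

```python
# Minimal, illustrative sketch of a QA-based summary evaluation loop.
# The model choice and scoring scheme are assumptions for illustration,
# not the authors' actual QuestEval implementation.
from transformers import pipeline

# Extractive QA model applied to both the summary and the source document.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def qa_consistency_score(questions, source, summary):
    """Fraction of questions whose answers match between source and summary."""
    if not questions:
        return 0.0
    matches = 0
    for q in questions:
        ans_source = qa(question=q, context=source)["answer"]
        ans_summary = qa(question=q, context=summary)["answer"]
        # Exact-match comparison; QA-based metrics typically use a softer
        # token-level overlap (e.g. F1) instead.
        matches += int(ans_source.strip().lower() == ans_summary.strip().lower())
    return matches / len(questions)

source = "The Eiffel Tower, completed in 1889, is 330 metres tall."
summary = "Completed in 1889, the Eiffel Tower stands 330 metres high."
questions = [
    "When was the Eiffel Tower completed?",
    "How tall is the Eiffel Tower?",
]
print(qa_consistency_score(questions, source, summary))
```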