We propose a new reference-free summary quality evaluation measure, with an emphasis on faithfulness. The measure is based on finding and counting all probable inconsistencies of the summary with respect to the source document. The proposed ESTIME, Estimator of Summary-to-Text Inconsistency by Mismatched Embeddings, correlates with expert scores on the summary-level SummEval dataset more strongly than other common evaluation measures, not only in Consistency but also in Fluency. We also introduce a method of generating subtle factual errors in human summaries, and show that ESTIME is more sensitive to such subtle errors than other common evaluation measures.
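The core "mismatched embeddings" idea can be illustrated with a minimal sketch: embed each summary token in context, find its nearest-neighbor token embedding in the source document, and count the cases where the nearest neighbor corresponds to a different token. This is a simplified approximation, not the paper's exact procedure; the choice of bert-base-uncased, the use of unmasked last-layer hidden states, and cosine similarity are all assumptions made here for illustration.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def token_embeddings(text):
    # Contextual embeddings for every non-special token (truncated to the
    # model's 512-token limit; long source documents would need chunking).
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    ids = enc["input_ids"][0]
    keep = [i for i, t in enumerate(ids.tolist())
            if t not in tokenizer.all_special_ids]
    return ids[keep], hidden[keep]

def inconsistency_count(summary, source):
    # For each summary token, find the most similar source-token embedding
    # (by cosine similarity) and count the cases where that nearest
    # neighbor is a different token: a "mismatched embedding".
    sum_ids, sum_emb = token_embeddings(summary)
    src_ids, src_emb = token_embeddings(source)
    sims = (torch.nn.functional.normalize(sum_emb, dim=1)
            @ torch.nn.functional.normalize(src_emb, dim=1).T)
    nearest = sims.argmax(dim=1)
    return int((src_ids[nearest] != sum_ids).sum())

A larger count indicates more probable inconsistencies of the summary with respect to the source. The published measure may differ in which hidden layer is used, whether summary tokens are masked before embedding, and how the count is normalized.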