Abstract

A desirable property of a reference-based evaluation metric that measures the content quality of a summary is that it should estimate how much information that summary has in common with a reference. Traditional text overlap-based metrics such as ROUGE fail to achieve this because they are limited to matching tokens, either lexically or via embeddings. In this work, we propose a metric to evaluate the content quality of a summary using question-answering (QA). QA-based methods directly measure a summary's information overlap with a reference, making them fundamentally different than text overlap metrics. We demonstrate the experimental benefits of QA-based metrics through an analysis of our proposed metric, QAEval. QAEval outperforms current state-of-the-art metrics on most evaluations using benchmark datasets, while being competitive on others due to limitations of state-of-the-art models. Through a careful analysis of each component of QAEval, we identify its performance bottlenecks and estimate that its potential upper-bound performance surpasses all other automatic metrics, approaching that of the gold-standard Pyramid Method.
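To make the QA-based scoring idea concrete, the sketch below shows the general pipeline shape: questions derived from the reference are answered against the candidate summary, and the answers are scored for overlap with the gold answers. This is not the actual QAEval implementation; QAEval uses learned question-generation and QA models, whereas here the question-answer pairs and the `answer_from_summary` stand-in are hand-written assumptions for illustration.

```python
# Minimal sketch of a QA-based content-quality score, assuming hand-written
# QA pairs in place of QAEval's learned question-generation model and a toy
# retrieval heuristic in place of its trained QA model.
from collections import Counter


def token_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted answer and a gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


def answer_from_summary(question: str, summary: str) -> str:
    """Stand-in QA model: return the summary sentence sharing the most
    tokens with the question (a real metric uses a trained QA model)."""
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    q_tokens = set(question.lower().split())
    return max(sentences, key=lambda s: len(q_tokens & set(s.lower().split())))


def qa_content_score(qa_pairs: list[tuple[str, str]], summary: str) -> float:
    """Average answer F1 over reference-derived questions: a proxy for how
    much of the reference's information the summary preserves."""
    scores = [token_f1(answer_from_summary(q, summary), a) for q, a in qa_pairs]
    return sum(scores) / len(scores)


# QA pairs that, in QAEval, would be generated automatically from the reference.
qa_pairs = [
    ("Who proposed the metric", "the authors"),
    ("What does the metric measure", "content quality of a summary"),
]
summary = "The authors propose a metric. It measures content quality of a summary."
print(qa_content_score(qa_pairs, summary))
```

Because the score is computed over answers rather than surface tokens of the whole summary, a summary can phrase the reference's facts differently and still score well, which is what distinguishes this family of metrics from ROUGE-style text overlap.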