ROUGE is a widely used evaluation metric in text summarization. However, it is not suitable for evaluating abstractive summarization systems, as it relies on lexical overlap between the gold standard and the generated summaries. This limitation becomes more apparent for agglutinative languages with very large vocabularies and high type/token ratios. In this paper, we present semantic similarity models for Turkish and apply them as evaluation metrics for an abstractive summarization task. To achieve this, we translated the English STSb dataset into Turkish, thereby also presenting the first semantic textual similarity dataset for Turkish. We show that our best similarity models align better with average human judgments than ROUGE does, in terms of both Pearson and Spearman correlation.
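As a rough illustration of the evaluation setup described above, the sketch below scores a generated summary against a gold summary by the cosine similarity of their sentence embeddings, then correlates those metric scores with averaged human judgments using Pearson and Spearman coefficients. This is a minimal sketch under stated assumptions: the encoder name, the helper function, and the toy data are placeholders for illustration, not the paper's released Turkish models or data.

```python
# Minimal sketch: embedding-similarity metric + correlation with human judgments.
# Assumes the sentence-transformers and scipy libraries; the model name is a
# generic multilingual placeholder, NOT the paper's Turkish-specific model.
from scipy.stats import pearsonr, spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def semantic_similarity(reference: str, generated: str) -> float:
    """Score a generated summary by the cosine similarity between its
    sentence embedding and that of the gold summary (roughly [-1, 1])."""
    emb = model.encode([reference, generated], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# Toy data (placeholders): gold summaries, system outputs, and the
# corresponding averaged human ratings for each output.
references = ["gold summary one", "gold summary two", "gold summary three"]
candidates = ["system output one", "system output two", "system output three"]
human_scores = [4.5, 2.0, 3.2]

# Evaluate the metric itself by how well it tracks the human ratings.
metric_scores = [semantic_similarity(r, c) for r, c in zip(references, candidates)]
print("Pearson:  %.3f" % pearsonr(metric_scores, human_scores)[0])
print("Spearman: %.3f" % spearmanr(metric_scores, human_scores)[0])
```

The same correlation procedure applies to ROUGE: replace `semantic_similarity` with a ROUGE scorer and compare the resulting coefficients, which is how a similarity model can be shown to align better with human judgments.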