Although some recent work has shown potential complementarity among different state-of-the-art systems, few studies have investigated this problem for text summarization. Researchers in other areas commonly turn to reranking or stacking techniques to approach it. In this work, we highlight several limitations of these previous methods, which motivates us to present a new framework, Refactor, that provides a unified view of text summarization and summary combination. Experimentally, we perform a comprehensive evaluation involving twenty-two base systems, four datasets, and three different application scenarios. Besides new state-of-the-art results on the CNN/DailyMail dataset (46.18 ROUGE-1), we also elaborate on how our proposed method addresses the limitations of traditional methods, and how the effectiveness of the Refactor model sheds light on directions for performance improvement. Our system can be used directly by other researchers as an off-the-shelf tool to achieve further performance improvements. We open-source all the code and provide a convenient interface: https://github.com/yixinL7/Refactoring-Summarization.
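To make the reranking idea mentioned above concrete, the sketch below shows the general shape of the approach: several base systems each produce a candidate summary for a document, and a scoring function selects the best one. The `overlap_score` and `rerank` functions here are hypothetical stand-ins for illustration only; the actual Refactor model learns its scoring function (see the linked repository for the real interface).

```python
# Minimal sketch of candidate-summary reranking, assuming a toy lexical scorer.
# In the Refactor framework the scorer is a learned model; here we substitute a
# simple token-overlap heuristic purely to illustrate the selection step.

def overlap_score(document: str, candidate: str) -> float:
    """Toy scorer: fraction of candidate tokens that also appear in the source."""
    doc_tokens = set(document.lower().split())
    cand_tokens = candidate.lower().split()
    if not cand_tokens:
        return 0.0
    return sum(tok in doc_tokens for tok in cand_tokens) / len(cand_tokens)

def rerank(document: str, candidates: list[str]) -> str:
    """Return the candidate summary scored highest for this document."""
    return max(candidates, key=lambda c: overlap_score(document, c))

if __name__ == "__main__":
    doc = "The committee approved the new budget after a long debate on spending."
    candidates = [
        "A new budget was approved.",        # e.g. from an abstractive system
        "The committee debated spending.",   # e.g. from an extractive system
        "Officials met yesterday.",          # weakly grounded candidate
    ]
    print(rerank(doc, candidates))
```

Stacking differs in that the combiner may also merge content across candidates rather than pick a single one; the abstract's "summary combination" view covers both cases under one framework.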