This paper presents an unsupervised extractive approach to summarizing long scientific documents based on the Information Bottleneck principle. Inspired by previous work that uses the Information Bottleneck principle for sentence compression, we extend it to document-level summarization with two separate steps. In the first step, we use signal(s) as queries to retrieve the key content from the source document. Then, a pre-trained language model conducts further sentence search and editing to return the final extracted summaries. Importantly, our work can be flexibly extended to a multi-view framework by using different signals. Automatic evaluation on three scientific document datasets verifies the effectiveness of the proposed framework. A further human evaluation suggests that the extracted summaries cover more content aspects than those of previous systems.
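To make the two-step pipeline concrete, here is a minimal toy sketch, not the authors' implementation: the function name `ib_extract`, the `signal` query, and the `redundancy_threshold` parameter are all hypothetical, TF-IDF cosine similarity stands in for the paper's retrieval step, and a greedy redundancy filter replaces the pre-trained language model's sentence search-and-edit step.

```python
# Toy sketch of the two-step extractive pipeline (assumed design, not the paper's code).
# Step 1: score each sentence by relevance to a "signal" query.
# Step 2: greedily select non-redundant sentences, a crude stand-in for the
#         paper's LM-based search-and-edit step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def ib_extract(sentences, signal, top_k=3, redundancy_threshold=0.6):
    vec = TfidfVectorizer().fit(sentences + [signal])
    S = vec.transform(sentences)          # sentence vectors
    q = vec.transform([signal])           # signal (query) vector
    # Step 1: relevance of each sentence to the signal.
    relevance = cosine_similarity(S, q).ravel()
    ranked = sorted(range(len(sentences)), key=lambda i: -relevance[i])
    # Step 2: keep high-relevance sentences that are not near-duplicates
    # of already-selected ones (compression side of the IB trade-off).
    chosen = []
    for i in ranked:
        if len(chosen) == top_k:
            break
        if all(cosine_similarity(S[i], S[j])[0, 0] < redundancy_threshold
               for j in chosen):
            chosen.append(i)
    return [sentences[i] for i in sorted(chosen)]

if __name__ == "__main__":
    doc = [
        "We study summarization of long scientific documents.",
        "The Information Bottleneck principle guides content selection.",
        "Key sentences are retrieved with a query signal.",
        "Experiments show strong results on scientific datasets.",
    ]
    print(ib_extract(doc, signal="information bottleneck summarization"))
```

Under this reading, the multi-view extension amounts to running the same extraction with several different signals and merging the resulting sentence sets.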