Neural-based summarization models suffer from the length limitation of the text encoder. Long documents have to be truncated before they are sent to the model, which results in a huge loss of summary-relevant content. To address this issue, we propose the sliding selector network with dynamic memory for extractive summarization of long-form documents, which employs a sliding window to extract summary sentences segment by segment. Moreover, we adopt a memory mechanism to preserve and update history information dynamically, allowing semantic flow across different windows. Experimental results on two large-scale datasets consisting of scientific papers demonstrate that our model substantially outperforms previous state-of-the-art models. Besides, we perform qualitative and quantitative investigations into how our model works and where the performance gain comes from.
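To make the segment-by-segment pipeline concrete, the following is a minimal PyTorch sketch of the general idea, assuming non-overlapping windows of sentence embeddings, a GRU segment encoder, and a GRU-cell memory update. The names (SlidingSelector, sent_encoder, memory_cell, scorer) and the mean-pooled memory update are illustrative stand-ins, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SlidingSelector(nn.Module):
    """Scores sentences window by window, carrying a dynamic memory state
    across windows so later segments are scored with earlier context."""

    def __init__(self, hidden=256):
        super().__init__()
        self.sent_encoder = nn.GRU(hidden, hidden, batch_first=True)  # stand-in segment encoder
        self.memory_cell = nn.GRUCell(hidden, hidden)                 # dynamic memory update
        self.scorer = nn.Linear(2 * hidden, 1)                        # sentence salience score

    def forward(self, segments):
        # segments: list of (num_sents_in_window, hidden) tensors
        memory = torch.zeros(1, segments[0].size(-1))
        scores = []
        for seg in segments:
            enc, _ = self.sent_encoder(seg.unsqueeze(0))  # contextualize sentences within the window
            enc = enc.squeeze(0)
            mem = memory.expand(enc.size(0), -1)          # pair each sentence with the current memory
            scores.append(self.scorer(torch.cat([enc, mem], dim=-1)).squeeze(-1))
            # fold a summary of this window into the memory so semantics flow to later windows
            memory = self.memory_cell(enc.mean(dim=0, keepdim=True), memory)
        return torch.cat(scores)  # one salience score per sentence in the document
```

A hypothetical usage: split a long document's sentence embeddings into windows, score all sentences, and extract the top-k in document order.

```python
doc = torch.randn(120, 256)                # 120 precomputed sentence embeddings
windows = list(doc.split(32))              # windows of up to 32 sentences
model = SlidingSelector(hidden=256)
salience = model(windows)                  # (120,) salience scores
top5 = salience.topk(5).indices.sort().values  # top-5 summary sentences, in order
```

The point of the carried memory state is that each window only needs to be encoded once, yet sentences in later segments are still scored with awareness of earlier content, approximating full-document context without feeding the whole document through the encoder.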