The quadratic computational and memory complexities of large Transformers have limited their scalability for long document summarization. In this paper, we propose Hepos, a novel efficient encoder-decoder attention with head-wise positional strides to effectively pinpoint salient information from the source. We further conduct a systematic study of existing efficient self-attentions. Combined with Hepos, we are able to process ten times more tokens than existing models that use full attentions. For evaluation, we present a new dataset, GovReport, with significantly longer documents and summaries. Results show that our models produce significantly higher ROUGE scores than competitive comparisons, including new state-of-the-art results on PubMed. Human evaluation also shows that our models generate more informative summaries with fewer unfaithful errors.
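To make the head-wise positional stride idea concrete, here is a minimal sketch of such a cross-attention, assuming a PyTorch setting; the function name, tensor layout, and single shared stride are illustrative assumptions, not the authors' released code. Head h is restricted to source positions j with j % stride == h % stride, so the heads jointly cover the whole source while each sees only about src_len/stride tokens.

import torch
import torch.nn.functional as F

def hepos_cross_attention(q, k, v, stride):
    # q: (batch, heads, tgt_len, d)   decoder queries
    # k, v: (batch, heads, src_len, d)  encoder keys/values
    # Illustrative sketch: the stride pattern is enforced with a mask here;
    # a real implementation would gather the strided keys/values per head
    # instead, which is what yields the memory savings.
    b, n_heads, tgt_len, d = q.shape
    src_len = k.size(2)
    scores = torch.einsum("bhqd,bhkd->bhqk", q, k) / d ** 0.5
    heads = torch.arange(n_heads).view(n_heads, 1)   # (heads, 1)
    pos = torch.arange(src_len).view(1, src_len)     # (1, src_len)
    # Head h may only attend to positions j with j % stride == h % stride.
    keep = (pos % stride) == (heads % stride)        # (heads, src_len)
    scores = scores.masked_fill(~keep.view(1, n_heads, 1, src_len), float("-inf"))
    attn = F.softmax(scores, dim=-1)
    return torch.einsum("bhqk,bhkd->bhqd", attn, v)

Because each head handles a disjoint strided slice of the source, encoder-decoder attention memory shrinks by roughly a factor of the stride, which is what allows pairing Hepos with an efficient encoder self-attention to process much longer inputs.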
References used: https://aclanthology.org/
A crucial difference between single- and multi-document summarization is how salient content manifests itself in the document(s). While such content may appear at the beginning of a single document, essential information is frequently reiterated…
This paper presents an unsupervised extractive approach to summarize scientific long documents based on the Information Bottleneck principle. Inspired by previous work which uses the Information Bottleneck principle for sentence compression, we extend…
Multi-label document classification (MLDC) problems can be challenging, especially for long documents with a large label set and a long-tail distribution over labels. In this paper, we present an effective convolutional attention network for MLDC…
Neural-based summarization models suffer from the length limitation of the text encoder. Long documents have to be truncated before they are sent to the model, which results in a huge loss of summary-relevant content. To address this issue, we propose…
To capture the semantic graph structure from raw text, most existing summarization approaches are built on GNNs with a pre-trained model. However, these methods suffer from cumbersome procedures and inefficient computations for long-text documents…