Large pretrained models have seen enormous success in extractive summarization tasks. In this work, we investigate the influence of pretraining on a BERT-based extractive summarization system for scientific documents. We derive significant performance improvements using an intermediate pretraining step that leverages existing summarization datasets and report state-of-the-art results on a recently released scientific summarization dataset, SciTLDR. We systematically analyze the intermediate pretraining step by varying the size and domain of the pretraining corpus, changing the length of the input sequence in the target task and varying target tasks. We also investigate how intermediate pretraining interacts with contextualized word embeddings trained on different domains.
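The core idea above is a two-stage schedule: first fine-tune on an existing summarization corpus (the intermediate pretraining step), then fine-tune on the target dataset (here, SciTLDR), and finally score and select sentences extractively. A minimal, dependency-free sketch of that schedule is below; the toy per-sentence logistic scorer, the function names, and the feature representation are illustrative assumptions, not the paper's actual BERT-based implementation.

```python
import math

def train_epoch(weights, corpus, lr=0.1):
    """One pass of logistic-regression-style updates on (features, label) pairs,
    where label = 1 means the sentence belongs in the summary."""
    for features, label in corpus:
        score = sum(w * x for w, x in zip(weights, features))
        prob = 1 / (1 + math.exp(-score))
        grad = prob - label  # derivative of log-loss w.r.t. the score
        weights = [w - lr * grad * x for w, x in zip(weights, features)]
    return weights

def staged_finetune(weights, intermediate_corpus, target_corpus, epochs=5):
    """Two-stage schedule: intermediate pretraining, then target fine-tuning."""
    # Stage 1: intermediate pretraining on an existing summarization dataset.
    for _ in range(epochs):
        weights = train_epoch(weights, intermediate_corpus)
    # Stage 2: fine-tune on the target summarization task.
    for _ in range(epochs):
        weights = train_epoch(weights, target_corpus)
    return weights

def extract_summary(weights, doc_sentences, k=1):
    """Select the k highest-scoring sentences as the extractive summary."""
    scored = [(sum(w * x for w, x in zip(weights, feats)), sent)
              for sent, feats in doc_sentences]
    return [sent for _, sent in sorted(scored, reverse=True)[:k]]
```

In the paper's actual setting, the scorer would be a BERT encoder with a classification head over sentence representations, and each stage would be full gradient-based fine-tuning; the skeleton above only mirrors the ordering of the two training stages and the final top-k sentence selection.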