Sentence extractive summarization shortens a document by selecting sentences for a summary while preserving its important contents. However, constructing a coherent and informative summary is difficult using a pre-trained BERT-based encoder since it is not explicitly trained for representing the information of sentences in a document. We propose a nested tree-based extractive summarization model on RoBERTa (NeRoBERTa), where nested tree structures consist of syntactic and discourse trees in a given document. Experimental results on the CNN/DailyMail dataset showed that NeRoBERTa outperforms baseline models in ROUGE. Human evaluation results also showed that NeRoBERTa achieves significantly better scores than the baselines in terms of coherence and yields comparable scores to the state-of-the-art models.
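To make the extract-by-scoring setup concrete, below is a minimal, hypothetical Python sketch of sentence-extractive summarization with a pre-trained RoBERTa encoder. It is not the NeRoBERTa nested-tree model itself (the syntactic and discourse tree structures are the paper's contribution and are not reproduced here); it only illustrates the generic pipeline such models build on: encode each sentence, score it against the document, and select the top-k. The roberta-base checkpoint, mean pooling, and cosine-similarity scoring are illustrative assumptions, not choices from the paper.

```python
# Hypothetical sketch: generic sentence-extractive summarization with RoBERTa.
# NOT the NeRoBERTa nested-tree method; scoring and pooling are placeholder choices.
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states as a simple sentence embedding."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)              # (dim,)

def extract_summary(sentences: list[str], k: int = 3) -> list[str]:
    """Select the k sentences most similar to the whole-document embedding."""
    doc_emb = embed(" ".join(sentences))
    scores = [torch.cosine_similarity(embed(s), doc_emb, dim=0).item()
              for s in sentences]
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # preserve document order

sentences = [
    "The committee met on Tuesday to review the budget.",
    "Several members raised concerns about rising costs.",
    "A final vote is scheduled for next month.",
    "Lunch was served in the main hall.",
]
print(extract_summary(sentences, k=2))
```

A model like NeRoBERTa replaces the flat similarity scoring above with structure-aware scoring over the document's syntactic and discourse trees, which is what the paper credits for the coherence gains.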