Presentation slides generated from original research papers provide an efficient form for presenting research innovations. Manually creating presentation slides is labor-intensive. We propose a method to automatically generate slides for scientific articles, based on a corpus of 5,000 paper-slide pairs compiled from conference proceedings websites. The sentence labeling module of our method is based on SummaRuNNer, a neural sequence model for extractive summarization. Instead of ranking sentences by semantic similarity over the whole document, our algorithm measures the importance and novelty of sentences by combining semantic and lexical features within a sentence window. Our method outperforms several baseline methods, including SummaRuNNer, by a significant margin in terms of ROUGE score.
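To make the window-based scoring idea concrete, here is a minimal sketch under our own simplifying assumptions: a bag-of-words cosine stands in for the semantic features, Jaccard token overlap stands in for the lexical ones, and importance and novelty are combined multiplicatively inside a fixed sentence window. This is an illustration only, not the paper's actual SummaRuNNer-based labeling module; all function names and the combination weights are hypothetical.

```python
from collections import Counter
import math

def bow(sentence):
    """Token-count vector for a sentence (crude stand-in for a semantic embedding)."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    num = sum(c * b.get(t, 0) for t, c in a.items())
    den = math.sqrt(sum(c * c for c in a.values())) * math.sqrt(sum(c * c for c in b.values()))
    return num / den if den else 0.0

def jaccard(a, b):
    """Lexical overlap between the token sets of two sentences."""
    sa, sb = set(a), set(b)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

def window_scores(sentences, window=3, alpha=0.5):
    """Score each sentence within a +/- `window` neighbourhood:
    importance = mean combined similarity to its neighbours,
    novelty    = 1 - max similarity to the sentences *before* it in the window."""
    vecs = [bow(s) for s in sentences]
    scores = []
    for i, v in enumerate(vecs):
        lo, hi = max(0, i - window), min(len(vecs), i + window + 1)
        neighbours = [j for j in range(lo, hi) if j != i]
        if not neighbours:
            scores.append(0.0)
            continue
        sims = [alpha * cosine(v, vecs[j]) + (1 - alpha) * jaccard(v, vecs[j])
                for j in neighbours]
        importance = sum(sims) / len(sims)
        prev_sims = [s for j, s in zip(neighbours, sims) if j < i]
        novelty = 1.0 - max(prev_sims) if prev_sims else 1.0
        scores.append(importance * novelty)
    return scores

if __name__ == "__main__":
    doc = [
        "We propose a method to generate slides from papers.",
        "The method labels sentences with a neural sequence model.",
        "Sentence scores combine semantic and lexical features in a window.",
        "Experiments show gains over extractive summarization baselines.",
    ]
    for sent, score in sorted(zip(doc, window_scores(doc)), key=lambda x: -x[1]):
        print(f"{score:.3f}  {sent}")
```

The window restriction reflects the abstract's point: rather than comparing every sentence against the whole document, each sentence is judged only against its local context, which keeps redundancy (low novelty) and centrality (high importance) decisions local to a section of the paper.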