This paper introduces MediaSum, a large-scale media interview dataset consisting of 463.6K transcripts with abstractive summaries. To create this dataset, we collect interview transcripts from NPR and CNN and employ the overview and topic descriptions as summaries. Compared with existing public corpora for dialogue summarization, our dataset is an order of magnitude larger and contains complex multi-party conversations from multiple domains. We conduct statistical analysis to demonstrate the unique positional bias exhibited in the transcripts of television and radio interviews. We also show that MediaSum can be used in transfer learning to improve a model's performance on other dialogue summarization tasks.