
Dynamic Sliding Window for Meeting Summarization

Posted by: Zhengyuan Liu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English

Abstractive spoken language summarization has recently attracted growing research interest, and neural sequence-to-sequence approaches have brought significant performance improvements. However, summarizing long meeting transcripts remains challenging. Because the source content and target summaries are long, neural models are prone to being distracted by context and produce summaries of degraded quality. Moreover, pre-trained language models with input length limitations cannot be readily applied to long sequences. In this work, we first analyze the linguistic characteristics of meeting transcripts on a representative corpus, and find that the sentences comprising the summary correlate with the meeting agenda. Based on this observation, we propose a dynamic sliding window strategy for meeting summarization. Experimental results show that performance benefits from the proposed method, and the outputs achieve higher factual consistency than those of the base model.
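To make the idea concrete, here is a minimal sketch of how a dynamic sliding window could drive summarization of a long transcript: windows of variable length are closed at detected topic shifts, each window is summarized independently, and the partial summaries are concatenated. The `is_topic_shift` predicate, the `summarize` callable, and the `max_window` cap are illustrative placeholders, not the paper's exact boundary criterion, which is derived from the correlation with the meeting agenda.

```python
# A hedged sketch of dynamic-sliding-window summarization; all names below
# are illustrative assumptions, not the authors' implementation.
from typing import Callable, List

def dynamic_windows(utterances: List[str],
                    is_topic_shift: Callable[[str, str], bool],
                    max_window: int = 32) -> List[List[str]]:
    """Group utterances into variable-length windows, closing a window
    early when a topic shift (e.g., an agenda transition) is detected."""
    windows, current = [], []
    for utt in utterances:
        if current and (len(current) >= max_window
                        or is_topic_shift(current[-1], utt)):
            windows.append(current)
            current = []
        current.append(utt)
    if current:
        windows.append(current)
    return windows

def summarize_meeting(utterances: List[str],
                      summarize: Callable[[str], str],
                      is_topic_shift: Callable[[str, str], bool]) -> str:
    """Summarize each window independently and concatenate the outputs,
    keeping every model input within the length limit."""
    parts = [summarize(" ".join(w))
             for w in dynamic_windows(utterances, is_topic_shift)]
    return " ".join(parts)

# Naive usage with a cue-phrase heuristic standing in for a topic-shift model:
shift = lambda prev, cur: cur.lower().startswith(("next item", "moving on"))
echo = lambda text: text[:60] + "..."  # stand-in for a neural summarizer
```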


Read also

With the abundance of automatic meeting transcripts, meeting summarization is of great interest to both participants and other parties. Traditional methods of summarizing meetings depend on complex multi-step pipelines that make joint optimization intractable. Meanwhile, there are a handful of deep neural models for text summarization and dialogue systems. However, the semantic structure and style of meeting transcripts are quite different from those of articles and conversations. In this paper, we propose a novel abstractive summary network that adapts to the meeting scenario. We design a hierarchical structure to accommodate long meeting transcripts and a role vector to depict the differences among speakers. Furthermore, due to the inadequacy of meeting summary data, we pretrain the model on large-scale news summary data. Empirical results show that our model outperforms previous approaches in both automatic metrics and human evaluation. For example, on the ICSI dataset, the ROUGE-1 score increases from 34.66% to 46.28%.
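As an illustration of the role-vector idea, the following is a minimal PyTorch sketch of a two-level (word, then turn) encoder that adds a learned role embedding to each turn representation. All dimensions, layer types, and the placement of the role vector are assumptions made for exposition, not the paper's exact architecture.

```python
# A hedged sketch of speaker-role conditioning in a hierarchical encoder.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size: int, n_roles: int, d_model: int = 128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.role_emb = nn.Embedding(n_roles, d_model)  # one vector per speaker role
        self.word_enc = nn.GRU(d_model, d_model, batch_first=True)
        self.turn_enc = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, turns: torch.Tensor, roles: torch.Tensor) -> torch.Tensor:
        # turns: (n_turns, turn_len) token ids; roles: (n_turns,) role ids
        x = self.tok_emb(turns)                         # word-level embeddings
        _, h = self.word_enc(x)                         # encode each turn
        turn_vecs = h[-1] + self.role_emb(roles)        # inject the role vector
        out, _ = self.turn_enc(turn_vecs.unsqueeze(0))  # turn-level encoder
        return out.squeeze(0)                           # (n_turns, d_model)

enc = HierarchicalEncoder(vocab_size=1000, n_roles=4)
turns = torch.randint(0, 1000, (6, 12))   # 6 turns, 12 tokens each
roles = torch.randint(0, 4, (6,))
print(enc(turns, roles).shape)            # torch.Size([6, 128])
```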
Meeting summarization is a challenging task due to its dynamic interaction nature among multiple speakers and lack of sufficient training data. Existing methods view the meeting as a linear sequence of utterances while ignoring the diverse relations between utterances. Besides, the limited labeled data further hinders the ability of data-hungry neural models. In this paper, we try to mitigate the above challenges by introducing dialogue-discourse relations. First, we present a Dialogue Discourse-Aware Meeting Summarizer (DDAMS) to explicitly model the interaction between utterances in a meeting by modeling different discourse relations. The core module is a relational graph encoder, where the utterances and discourse relations are modeled in a graph interaction manner. Moreover, we devise a Dialogue Discourse-Aware Data Augmentation (DDADA) strategy to construct a pseudo-summarization corpus from existing input meetings, which is 20 times larger than the original dataset and can be used to pretrain DDAMS. Experimental results on AMI and ICSI meeting datasets show that our full system can achieve SOTA performance. Our codes will be available at: https://github.com/xcfcode/DDAMS.
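A minimal sketch of the relational-graph idea: one R-GCN-style message-passing step over utterance nodes with typed discourse-relation edges. The relation inventory and the single-layer update below are illustrative assumptions, not the DDAMS encoder itself.

```python
# A hedged sketch of relational message passing over utterance nodes.
import torch
import torch.nn as nn

class RelationalLayer(nn.Module):
    def __init__(self, relations: list, d: int = 64):
        super().__init__()
        # one learned transform per discourse-relation type
        self.rel_proj = nn.ModuleDict({r: nn.Linear(d, d) for r in relations})
        self.self_proj = nn.Linear(d, d)

    def forward(self, h: torch.Tensor, edges: list) -> torch.Tensor:
        # h: (n_utterances, d); edges: (src, dst, relation) triples
        agg = torch.zeros_like(h)
        for src, dst, rel in edges:
            agg[dst] = agg[dst] + self.rel_proj[rel](h[src])  # typed messages
        return torch.relu(self.self_proj(h) + agg)

# Assumed relation labels for illustration only:
layer = RelationalLayer(["elaboration", "question-answer", "continuation"])
h = torch.randn(4, 64)                        # four utterance vectors
edges = [(0, 1, "question-answer"), (1, 2, "elaboration")]
print(layer(h, edges).shape)                  # torch.Size([4, 64])
```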
Ming Zhong, Da Yin, Tao Yu (2021)
Meetings are a key component of human collaboration. As increasing numbers of meetings are recorded and transcribed, meeting summaries have become essential to remind those who may or may not have attended the meetings about the key decisions made and the tasks to be completed. However, it is hard to create a single short summary that covers all the content of a long meeting involving multiple people and topics. In order to satisfy the needs of different types of users, we define a new query-based multi-domain meeting summarization task, where models have to select and summarize relevant spans of meetings in response to a query, and we introduce QMSum, a new benchmark for this task. QMSum consists of 1,808 query-summary pairs over 232 meetings in multiple domains. Besides, we investigate a locate-then-summarize method and evaluate a set of strong summarization baselines on the task. Experimental results and manual analysis reveal that QMSum presents significant challenges in long meeting summarization for future research. The dataset is available at https://github.com/Yale-LILY/QMSum.
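A minimal sketch of a locate-then-summarize pipeline of the kind evaluated on QMSum: first select the utterances relevant to the query, then summarize only those spans. The word-overlap locator and the `summarize` callable are placeholder assumptions; the paper's baselines use trained locators and neural summarizers.

```python
# A hedged sketch of query-based locate-then-summarize.
from typing import Callable, List

def locate(query: str, utterances: List[str], k: int = 5) -> List[str]:
    """Rank utterances by word overlap with the query; keep the top k
    in their original transcript order."""
    q = set(query.lower().split())
    scored = sorted(enumerate(utterances),
                    key=lambda iu: -len(q & set(iu[1].lower().split())))
    keep = sorted(i for i, _ in scored[:k])
    return [utterances[i] for i in keep]

def query_summarize(query: str, utterances: List[str],
                    summarize: Callable[[str], str]) -> str:
    """Summarize only the spans judged relevant to the query."""
    return summarize(" ".join(locate(query, utterances)))

# e.g. query_summarize("budget decisions", transcript, summarizer_fn)
```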
Transcripts generated by automatic speech recognition (ASR) systems for spoken documents lack structural annotations such as paragraphs, significantly reducing their readability. Automatically predicting paragraph segmentation for spoken documents may both improve readability and downstream NLP performance such as summarization and machine reading comprehension. We propose a sequence model with a self-adaptive sliding window for accurate and efficient paragraph segmentation. We also propose an approach to exploit phonetic information, which significantly improves the robustness of spoken document segmentation to ASR errors. Evaluations are conducted on the English Wiki-727K document segmentation benchmark, a Chinese Wikipedia-based document segmentation dataset we created, and an in-house Chinese spoken document dataset. Our proposed model outperforms the state-of-the-art (SOTA) model based on the same BERT-Base, increasing segmentation F1 on the English benchmark by 4.2 points and on the Chinese datasets by 4.3-10.1 points, while reducing inference time to less than 1/6 of that of the current SOTA.
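The following is a minimal sketch of sliding-window segmentation with a self-adaptive stride: after scoring one window, the window jumps just past the last confidently predicted boundary rather than advancing by a fixed step. The `boundary_score` callable is a placeholder standing in for the paper's BERT-based scorer, and the stride rule is an assumption for illustration.

```python
# A hedged sketch of self-adaptive sliding-window paragraph segmentation.
from typing import Callable, List

def segment(sentences: List[str],
            boundary_score: Callable[[List[str], int], float],
            window: int = 8, threshold: float = 0.5) -> List[int]:
    """Return indices i where a paragraph break is predicted after
    sentence i."""
    breaks, start = [], 0
    while start < len(sentences):
        end = min(start + window, len(sentences))
        ctx = sentences[start:end]
        hits = [start + j for j in range(len(ctx) - 1)
                if boundary_score(ctx, j) >= threshold]
        breaks.extend(hits)
        # adaptive stride: resume just past the last confident boundary,
        # falling back to a full-window step if none was found
        start = (hits[-1] + 1) if hits else end
    return breaks
```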
Dhruv Rohatgi (2018)
We extend the multi-pass streaming model to sliding window problems, and address the problem of computing order statistics on fixed-size sliding windows, in the multi-pass streaming model as well as the closely related communication complexity model. In the $2$-pass streaming model, we show that on input of length $N$ with values in range $[0,R]$ and a window of length $K$, sliding window minimums can be computed in $\widetilde{O}(\sqrt{N})$ space. We show that this is nearly optimal (for any constant number of passes) when $R \geq K$, but can be improved when $R = o(K)$ to $\widetilde{O}(\sqrt{NR/K})$. Furthermore, we show that there is an $(l+1)$-pass streaming algorithm which computes $l^\text{th}$-smallest elements in $\widetilde{O}(l^{3/2}\sqrt{N})$ space. In the communication complexity model, we describe a simple $\widetilde{O}(pN^{1/p})$ algorithm to compute minimums in $p$ rounds of communication for odd $p$, and a more involved algorithm which computes the $l^\text{th}$-smallest elements in $\widetilde{O}(pl^2 N^{1/(p-2l-1)})$ space. Finally, we prove that the majority statistic on boolean streams cannot be computed in sublinear space, implying that $l^\text{th}$-smallest elements cannot be computed in space both sublinear in $N$ and independent of $l$.
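For contrast with these sublinear-space streaming bounds, the classic one-pass algorithm for sliding window minimums uses a monotonic deque and O(K) words of memory; it illustrates the problem itself but does not meet the paper's restricted streaming setting. A minimal Python version:

```python
# The standard monotonic-deque sliding window minimum (O(K) space),
# shown only as a baseline for the problem; not the paper's algorithm.
from collections import deque
from typing import Iterable, Iterator

def window_minimums(stream: Iterable[int], k: int) -> Iterator[int]:
    """Yield the minimum of each length-k window of the stream."""
    dq = deque()  # holds (index, value); values increase front to back
    for i, x in enumerate(stream):
        while dq and dq[-1][1] >= x:   # drop values that can never be minima
            dq.pop()
        dq.append((i, x))
        if dq[0][0] <= i - k:          # front fell out of the window
            dq.popleft()
        if i >= k - 1:
            yield dq[0][1]

print(list(window_minimums([3, 1, 4, 1, 5, 9, 2, 6], 3)))  # [1, 1, 1, 1, 2, 2]
```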