In this paper, we focus on improving the quality of the summary generated by neural abstractive dialogue summarization systems. Even though pre-trained language models generate well-constructed and promising results, it is still challenging to summarize the conversation of multiple participants since the summary should include a description of the overall situation and the actions of each speaker. This paper proposes self-supervised strategies for speaker-focused post-correction in abstractive dialogue summarization. Specifically, our model first discriminates which type of speaker correction is required in a draft summary and then generates a revised summary according to the required type. Experimental results show that our proposed method adequately corrects the draft summaries, and the revised summaries are significantly improved in both quantitative and qualitative evaluations.
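The abstract describes a two-stage pipeline: first discriminate which type of speaker correction a draft summary needs, then revise the draft according to that type. A minimal sketch of that control flow is below; the function names, the correction-type labels, and the rule-based logic are purely illustrative assumptions — the paper's actual components are learned via self-supervision, not hand-written rules.

```python
# Illustrative sketch of the two-stage, speaker-focused post-correction
# idea from the abstract. All names and the rule-based logic are
# hypothetical stand-ins for the paper's learned (self-supervised) models.

def discriminate_correction_type(draft: str, speakers: list[str]) -> str:
    """Stage 1: decide which kind of speaker correction the draft needs."""
    mentioned = [s for s in speakers if s in draft]
    if len(mentioned) < len(speakers):
        # At least one dialogue participant never appears in the draft.
        return "missing_speaker"
    return "no_correction"

def revise(draft: str, speakers: list[str], correction_type: str) -> str:
    """Stage 2: generate a revised summary according to the required type."""
    if correction_type == "missing_speaker":
        missing = [s for s in speakers if s not in draft]
        # A real system would regenerate the summary; here we only
        # append a placeholder mention for each absent speaker.
        return draft + " " + " ".join(f"{s} also took part." for s in missing)
    return draft

draft = "Amanda will bring the cake."
speakers = ["Amanda", "Jerry"]
ctype = discriminate_correction_type(draft, speakers)
revised = revise(draft, speakers, ctype)
print(ctype)    # missing_speaker
print(revised)  # Amanda will bring the cake. Jerry also took part.
```

The key design point the sketch preserves is that revision is conditioned on the discriminated correction type, so the second stage only edits what the first stage flagged.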
References used
https://aclanthology.org/