This paper explores the effect of using multitask learning for abstractive summarization in the context of small training corpora. In particular, we incorporate four different tasks (extractive summarization, language modeling, concept detection, and paraphrase detection) both individually and in combination, with the goal of enhancing the target task of abstractive summarization via multitask learning. We show that for many task combinations, a model trained in a multitask setting outperforms a model trained only for abstractive summarization, with no additional summarization data introduced. Additionally, we do a comprehensive search and find that certain tasks (e.g. paraphrase detection) consistently benefit abstractive summarization, not only when combined with other tasks but also when using different architectures and training corpora.
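As a rough illustration of the multitask setup described above (a minimal sketch, not the authors' implementation), the arrangement below shares one encoder across all tasks and trains on a weighted sum of per-task cross-entropy losses. All names, dimensions, and the task weights are illustrative assumptions; a real abstractive summarizer would also need a decoder for generation, which is omitted here to keep the loss-combination idea in focus.

```python
# Sketch of multitask training with a shared encoder and task-specific heads.
# Hypothetical names and sizes; only the loss-weighting pattern is the point.
import torch
import torch.nn as nn

class MultitaskSummarizer(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, num_concepts=100):
        super().__init__()
        # Shared encoder used by every task.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=4,
        )
        # Task-specific heads (abstractive generation simplified to a projection).
        self.abstractive_head = nn.Linear(d_model, vocab_size)  # target task
        self.extractive_head = nn.Linear(d_model, 2)            # keep/drop sentence
        self.lm_head = nn.Linear(d_model, vocab_size)           # language modeling
        self.concept_head = nn.Linear(d_model, num_concepts)    # concept detection
        self.paraphrase_head = nn.Linear(d_model, 2)            # paraphrase yes/no

    def encode(self, tokens):
        return self.encoder(self.embed(tokens))

def multitask_loss(model, batches, weights):
    """Weighted sum of per-task losses; no extra summarization data is required,
    since auxiliary tasks bring their own labels."""
    ce = nn.CrossEntropyLoss()
    total = 0.0
    for task, (tokens, labels) in batches.items():
        hidden = model.encode(tokens)
        if task == "abstractive":
            logits = model.abstractive_head(hidden)              # token-level
        elif task == "lm":
            logits = model.lm_head(hidden)                       # token-level
        elif task == "concept":
            logits = model.concept_head(hidden.mean(dim=1))      # sequence-level
        elif task == "paraphrase":
            logits = model.paraphrase_head(hidden.mean(dim=1))   # sequence-level
        else:  # "extractive"
            logits = model.extractive_head(hidden.mean(dim=1))   # sequence-level
        total = total + weights[task] * ce(
            logits.reshape(-1, logits.size(-1)), labels.reshape(-1)
        )
    return total
```

In this sketch, training "only for abstractive summarization" corresponds to passing a single-entry `batches` dict, while a multitask run adds entries for whichever auxiliary tasks are combined; the weights dict would be tuned per task combination.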