We introduce a new pretraining approach geared for multi-document language modeling, incorporating two key ideas into the masked language modeling self-supervised objective. First, instead of considering documents in isolation, we pretrain over sets of multiple related documents, encouraging the model to learn cross-document relationships. Second, we improve over recent long-range transformers by introducing dynamic global attention that has access to the entire input to predict masked tokens. We release CDLM (Cross-Document Language Model), a new general language model for the multi-document setting that can be easily applied to downstream tasks. Our extensive analysis shows that both ideas are essential for the success of CDLM and that they work in synergy to set new state-of-the-art results on several multi-text tasks.
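To make the two ideas concrete, below is a minimal sketch, not the authors' released code, of cross-document masked language modeling with dynamic global attention. It assumes a Longformer backbone via the Hugging Face transformers library and a plain separator token between documents (the exact separator scheme and masking details are assumptions for illustration): related documents are concatenated into one long input, standard MLM corruption is applied, and the masked positions receive global attention so each prediction can attend across all documents.

```python
# Sketch of cross-document MLM with dynamic global attention,
# assuming a Longformer backbone (allenai/longformer-base-4096).

import torch
from transformers import LongformerForMaskedLM, LongformerTokenizerFast

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")

# Idea 1: pretrain over a *set* of related documents, concatenated
# into a single long input (separator token is an assumption here).
related_docs = [
    "First document describing some event ...",
    "Second, related document describing the same event ...",
]
text = tokenizer.sep_token.join(related_docs)
inputs = tokenizer(text, return_tensors="pt")

# Standard MLM corruption: mask 15% of the non-special tokens.
labels = inputs["input_ids"].clone()
probability_matrix = torch.full(labels.shape, 0.15)
special_tokens_mask = torch.tensor(
    tokenizer.get_special_tokens_mask(
        labels[0].tolist(), already_has_special_tokens=True
    ),
    dtype=torch.bool,
).unsqueeze(0)
probability_matrix.masked_fill_(special_tokens_mask, value=0.0)
masked_indices = torch.bernoulli(probability_matrix).bool()
labels[~masked_indices] = -100  # loss is computed only on masked tokens
inputs["input_ids"][masked_indices] = tokenizer.mask_token_id

# Idea 2: dynamic global attention -- the masked positions get global
# attention (1 = global, 0 = local windowed), so each masked-token
# prediction can access the entire multi-document input.
global_attention_mask = masked_indices.long()

outputs = model(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    labels=labels,
)
print(float(outputs.loss))
```

Because the global-attention pattern is recomputed from the masked positions of each example, the attention is "dynamic" rather than fixed to a predefined set of positions, which is the distinction the abstract draws against prior long-range transformers.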