Despite the recent advances in applying pre-trained language models to generate high-quality texts, generating long passages that maintain long-range coherence remains challenging for these models. In this paper, we propose DiscoDVT, a discourse-aware discrete variational Transformer to tackle the incoherence issue. DiscoDVT learns a discrete variable sequence that summarizes the global structure of the text and then applies it to guide the generation process at each decoding step. To further embed discourse-aware information into the discrete latent representations, we introduce an auxiliary objective to model the discourse relations within the text. We conduct extensive experiments on two open story generation datasets and demonstrate that the latent codes learn meaningful correspondence to the discourse structures that guide the model to generate long texts with better long-range coherence.
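To make the described pipeline concrete, the following is a minimal, illustrative sketch of the three ingredients the abstract names: (1) encoding the text into a short sequence of discrete latent codes, (2) conditioning the decoder on those codes at every decoding step, and (3) an auxiliary loss that predicts discourse relations from the latent representations. All module names, layer sizes, and the Gumbel-softmax discretization here are assumptions made for illustration; this is not the authors' exact architecture or training setup.

```python
# Hypothetical sketch of a discourse-aware discrete variational Transformer.
# Sizes, module names, and the discretization choice are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiscreteLatentSketch(nn.Module):
    def __init__(self, vocab=1000, d=64, codebook=32, n_relations=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
        self.to_code_logits = nn.Linear(d, codebook)       # posterior over discrete codes
        self.code_embed = nn.Embedding(codebook, d)        # embeddings of the latent codes
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d, nhead=4, batch_first=True), num_layers=2)
        self.lm_head = nn.Linear(d, vocab)
        self.discourse_head = nn.Linear(2 * d, n_relations)  # auxiliary objective

    def forward(self, tokens, discourse_pairs=None, discourse_labels=None):
        h = self.encoder(self.embed(tokens))                        # (B, T, d)
        code_logits = self.to_code_logits(h)                        # (B, T, K)
        # Differentiable discretization (Gumbel-softmax assumed for this sketch).
        codes = F.gumbel_softmax(code_logits, tau=1.0, hard=True)   # (B, T, K)
        z = codes @ self.code_embed.weight                          # (B, T, d)

        # The discrete latent sequence guides every decoding step via cross-attention.
        T = tokens.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        dec = self.decoder(self.embed(tokens), memory=z, tgt_mask=causal)
        rec_loss = F.cross_entropy(
            self.lm_head(dec[:, :-1]).reshape(-1, self.lm_head.out_features),
            tokens[:, 1:].reshape(-1))
        loss = rec_loss

        if discourse_pairs is not None:
            # Auxiliary loss: predict the discourse relation holding between two
            # text spans from their latent representations (indices illustrative).
            batch = torch.arange(z.size(0))
            left = z[batch, discourse_pairs[:, 0]]
            right = z[batch, discourse_pairs[:, 1]]
            loss = loss + F.cross_entropy(
                self.discourse_head(torch.cat([left, right], dim=-1)),
                discourse_labels)
        return loss
```

Under these assumptions, the reconstruction loss teaches the codes to summarize the text's global structure, while the auxiliary discourse-relation loss pushes them to encode how adjacent spans relate, which is what lets the codes guide long-range coherence at generation time.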