We propose neural models to generate text from formal meaning representations based on Discourse Representation Structures (DRSs). DRSs are document-level representations which encode rich semantic detail pertaining to rhetorical relations, presupposition, and co-reference within and across sentences. We formalize the task of neural DRS-to-text generation and provide modeling solutions for the problems of condition ordering and variable naming which render generation from DRSs non-trivial. Our generator relies on a novel sibling treeLSTM model which is able to accurately represent DRS structures and is more generally suited to trees with wide branches. We achieve competitive performance (59.48 BLEU) on the GMB benchmark against several strong baselines.
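The central modeling claim is the sibling treeLSTM, aimed at trees with wide branching, where summing over many children (as in a child-sum treeLSTM) dilutes per-child information and discards sibling order. As a rough illustration only, here is a minimal PyTorch sketch of the idea under our own assumptions: children are encoded left-to-right by an LSTM chain, so each sibling state conditions on the siblings before it. The class name `SiblingTreeLSTM`, the composition layer, and all dimensions are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class SiblingTreeLSTM(nn.Module):
    """Illustrative sketch (not the paper's exact parameterisation):
    children of a node are folded in left-to-right with an LSTM chain,
    so sibling order is preserved and wide branches stay tractable."""
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        self.sibling_cell = nn.LSTMCell(dim, dim)   # runs across siblings
        self.compose = nn.Linear(2 * dim, dim)      # merges node label + children summary

    def encode(self, node_emb, child_states):
        # node_emb: (1, dim) embedding of the node's label
        # child_states: list of (1, dim) child representations, in order
        h = node_emb.new_zeros(1, self.dim)
        c = node_emb.new_zeros(1, self.dim)
        for child_h in child_states:                 # sibling chain, left to right
            h, c = self.sibling_cell(child_h, (h, c))
        return torch.tanh(self.compose(torch.cat([node_emb, h], dim=-1)))

def encode_tree(model, embed, tree):
    """Bottom-up encoding of a nested (label_id, [children]) tree."""
    label_id, children = tree
    node_emb = embed(torch.tensor([label_id]))                      # (1, dim)
    child_states = [encode_tree(model, embed, ch) for ch in children]
    return model.encode(node_emb, child_states)

dim = 64
model = SiblingTreeLSTM(dim)
embed = nn.Embedding(100, dim)
# toy DRS-like tree: a root with several children, i.e. a wide branch
tree = (0, [(1, []), (2, [(3, [])]), (4, [])])
root_state = encode_tree(model, embed, tree)                        # (1, dim)
```

Unlike a child-sum cell, the running sibling state here grows no wider as branching increases, and reordering children changes the encoding, which matters for the condition-ordering problem the abstract mentions.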