We present an end-to-end neural approach to generate English sentences from formal meaning representations, Discourse Representation Structures (DRSs). We use a rather standard bi-LSTM sequence-to-sequence model, work with a linearized DRS input representation, and evaluate character-level and word-level decoders. We obtain very encouraging results in terms of reference-based automatic metrics such as BLEU. But because such metrics only evaluate the surface level of generated output, we develop a new metric, ROSE, that targets specific semantic phenomena. We do this with five DRS generation challenge sets focusing on tense, grammatical number, polarity, named entities and quantities. The aim of these challenge sets is to assess the neural generator's systematicity and generalization to unseen inputs.
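To make the described architecture concrete, below is a minimal sketch (not the authors' code) of a bi-LSTM sequence-to-sequence model of the kind the abstract mentions: a bidirectional LSTM encoder over a linearized DRS token sequence and a unidirectional LSTM decoder that emits the output character by character (or word by word, depending on the target vocabulary). All hyperparameters, tensor shapes, and the toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Bi-LSTM encoder over a linearized DRS, LSTM decoder over characters/words."""

    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        # Bidirectional encoder reads the linearized DRS token ids.
        self.encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        # Unidirectional decoder; its initial state is the concatenation
        # of the encoder's forward and backward final states.
        self.decoder = nn.LSTM(emb, 2 * hid, batch_first=True)
        self.out = nn.Linear(2 * hid, tgt_vocab)

    def forward(self, src, tgt):
        batch = src.size(0)
        enc_in = self.src_emb(src)                      # (B, S, E)
        _, (h, c) = self.encoder(enc_in)                # h, c: (2, B, H)
        # Merge the two directions into a single decoder state (1, B, 2H).
        h0 = h.transpose(0, 1).reshape(batch, -1).unsqueeze(0)
        c0 = c.transpose(0, 1).reshape(batch, -1).unsqueeze(0)
        dec_in = self.tgt_emb(tgt)                      # (B, T, E)
        dec_out, _ = self.decoder(dec_in, (h0, c0))     # (B, T, 2H)
        return self.out(dec_out)                        # (B, T, V)

# Toy usage with random ids; a real setup would teacher-force the decoder
# on the gold sentence shifted right by one position.
model = Seq2Seq(src_vocab=100, tgt_vocab=60)
src = torch.randint(0, 100, (4, 25))    # linearized-DRS token ids
tgt = torch.randint(0, 60, (4, 30))     # target character/word ids
logits = model(src, tgt)                # (4, 30, 60)
loss = nn.functional.cross_entropy(logits.reshape(-1, 60), tgt.reshape(-1))
```

A character-level decoder in this setup simply uses a small character vocabulary for `tgt_vocab`, while a word-level decoder swaps in a word vocabulary; the encoder over the linearized DRS is unchanged.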