Abstract

Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof. These models generate text that is fluent (but often imprecise) and perform quite poorly at selecting appropriate content and ordering it coherently. To overcome some of these issues, we propose a neural model with a macro planning stage followed by a generation stage reminiscent of traditional methods which embrace separate modules for planning and surface realization. Macro plans represent high level organization of important content such as entities, events, and their interactions; they are learned from data and given as input to the generator. Extensive experiments on two data-to-text benchmarks (RotoWire and MLB) show that our approach outperforms competitive baselines in terms of automatic and human evaluation.
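To make the two-stage decomposition concrete, the sketch below illustrates a plan-then-generate pipeline in the spirit of the abstract: a macro planning step first selects and orders entity-centric content, and a separate realization step verbalizes the resulting plan. This is an illustration only, not the paper's method: both stages are learned neural models in the paper, whereas here simple rule-based stand-ins (`select_plan`, `realize`) and the `Record` type are hypothetical names invented for the example.

```python
# Minimal sketch of a two-stage plan-then-generate data-to-text pipeline.
# Assumption: the neural planner and generator from the paper are replaced
# by heuristic stand-ins so the example stays self-contained and runnable.

from dataclasses import dataclass
from typing import List

@dataclass
class Record:
    entity: str      # e.g. a player or team
    attribute: str   # e.g. "points", "rebounds"
    value: int

def select_plan(records: List[Record], k: int = 2) -> List[List[Record]]:
    """Macro planning stage: decide which entities to mention and in what
    order. Each inner list is one paragraph-level plan grouping the records
    of a single entity. The paper learns this selection and ordering from
    data; this stand-in simply ranks entities by their largest value."""
    by_entity = {}
    for r in records:
        by_entity.setdefault(r.entity, []).append(r)
    ranked = sorted(by_entity.values(),
                    key=lambda rs: max(r.value for r in rs),
                    reverse=True)
    return ranked[:k]

def realize(plan: List[List[Record]]) -> str:
    """Generation stage: verbalize the macro plan group by group. A neural
    generator would condition on the plan; this template keeps the example
    dependency-free."""
    sentences = []
    for group in plan:
        parts = [f"{r.value} {r.attribute}" for r in group]
        sentences.append(f"{group[0].entity} recorded " + ", ".join(parts) + ".")
    return " ".join(sentences)

if __name__ == "__main__":
    records = [
        Record("LeBron James", "points", 32),
        Record("LeBron James", "assists", 9),
        Record("Kevin Love", "points", 18),
        Record("Tristan Thompson", "rebounds", 12),
    ]
    plan = select_plan(records, k=2)   # macro planning stage
    print(realize(plan))               # surface realization stage
```

Running the script prints "LeBron James recorded 32 points, 9 assists. Kevin Love recorded 18 points." The point of the decomposition is that content selection and ordering errors are confined to the planning stage, while the generator only has to realize an already-coherent plan.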