Recent developments in neural networks have led to advances in data-to-text generation. However, the inability of neural models to control the structure of the generated output can be limiting in certain real-world applications. In this study, we propose a novel Plan-then-Generate (PlanGen) framework to improve the controllability of neural data-to-text models. Extensive experiments and analyses are conducted on two benchmark datasets, ToTTo and WebNLG. The results show that our model is able to control both the intra-sentence and inter-sentence structure of the generated output. Furthermore, empirical comparisons against previous state-of-the-art methods show that our model improves both generation quality and output diversity, as judged by human and automatic evaluations.