Large pre-trained neural models have recently shown remarkable progress in text generation. In this paper, we propose to generate text conditioned on structured data (a table) and a prefix (the written text) by leveraging pre-trained models. We present a new data-to-text dataset, Table with Written Text (TWT), built by repurposing two existing datasets: ToTTo and TabFact. TWT contains both factual and logical statements that are faithful to the structured data, and it aims to serve as a useful benchmark for controlled text generation. Compared with existing data-to-text task settings, TWT is more intuitive: the prefix (usually provided by the user) controls the topic of the generated text. Existing methods usually produce hallucinated text on TWT that is not faithful to the table. We therefore design a novel approach with table-aware attention visibility and a copy mechanism over the table. Experimental results show that our approach outperforms state-of-the-art methods under both automatic and human evaluation metrics.
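To give a concrete flavor of the two components named above, here is a minimal PyTorch sketch of an attention visibility mask over table structure and a pointer-generator style copy distribution. All names (table_aware_visibility_mask, copy_augmented_distribution, cell_to_row, p_copy) and the specific masking scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def table_aware_visibility_mask(n_prefix, n_cells, cell_to_row):
    """Build an attention visibility mask (1 = visible, 0 = masked).

    Hypothetical scheme: prefix tokens attend everywhere, every token
    sees the prefix, and a table-cell token only sees cells from its
    own row, so attention cannot mix unrelated cells.
    """
    n = n_prefix + n_cells
    mask = torch.zeros(n, n)
    mask[:n_prefix, :] = 1.0   # prefix tokens attend to all tokens
    mask[:, :n_prefix] = 1.0   # all tokens see the prefix
    for i in range(n_cells):
        for j in range(n_cells):
            if cell_to_row[i] == cell_to_row[j]:  # same-row cells see each other
                mask[n_prefix + i, n_prefix + j] = 1.0
    return mask

def copy_augmented_distribution(vocab_logits, attn_weights, src_ids, p_copy):
    """Mix the decoder's vocabulary distribution with a copy distribution
    over source-table tokens (a standard pointer-generator style mix).

    vocab_logits: (batch, vocab)   decoder output logits
    attn_weights: (batch, src_len) attention over source-table tokens
    src_ids:      (batch, src_len) vocabulary ids of source tokens
    p_copy:       (batch, 1)       probability of copying vs. generating
    """
    p_vocab = F.softmax(vocab_logits, dim=-1)
    p_final = (1.0 - p_copy) * p_vocab
    # Scatter the copy probability mass onto the source tokens' vocab ids.
    p_final = p_final.scatter_add(-1, src_ids, p_copy * attn_weights)
    return p_final

# Example: 2 prefix tokens and 4 cell tokens laid out in rows [0, 0, 1, 1].
mask = table_aware_visibility_mask(2, 4, [0, 0, 1, 1])
print(mask)
```

Restricting visibility this way is one common method of keeping self-attention faithful to table structure; the copy term lets the decoder emit rare cell values (names, numbers) verbatim instead of hallucinating them.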