We propose to tackle data-to-text generation tasks by directly splicing together retrieved segments of text from "neighbor" source-target pairs. Unlike recent work that conditions on retrieved neighbors but generates text token-by-token, left-to-right, we learn a policy that directly manipulates segments of neighbor text, by inserting or replacing them in partially constructed generations. Standard techniques for training such a policy require an oracle derivation for each generation, and we prove that finding the shortest such derivation can be reduced to parsing under a particular weighted context-free grammar. We find that policies learned in this way perform on par with strong baselines in terms of automatic and human evaluation, but allow for more interpretable and controllable generation.
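To convey the flavor of the shortest-derivation idea, here is a minimal sketch (not the paper's implementation, and simpler than the weighted-CFG reduction it describes): if a "derivation" is just a segmentation of the target string into substrings that each occur in some neighbor text, the shortest one can be found by dynamic programming. All function and variable names are illustrative assumptions.

```python
def shortest_splice(target: str, neighbors: list[str]):
    """Return a minimal list of neighbor substrings that concatenate
    to `target`, or None if no such splice derivation exists."""
    n = len(target)
    # best[i] = fewest segments needed to build target[:i]
    best = [0] + [None] * n
    back = [None] * (n + 1)  # back[i] = start index of the last segment
    for i in range(1, n + 1):
        for j in range(i):
            if best[j] is None:
                continue
            seg = target[j:i]
            # segment is usable only if it appears in some neighbor
            if any(seg in nb for nb in neighbors):
                if best[i] is None or best[j] + 1 < best[i]:
                    best[i] = best[j] + 1
                    back[i] = j
    if best[n] is None:
        return None
    # walk the backpointers to recover the segmentation
    segs, i = [], n
    while i > 0:
        segs.append(target[back[i]:i])
        i = back[i]
    return segs[::-1]
```

For example, `shortest_splice("abcd", ["ab", "cd"])` returns `["ab", "cd"]`. The actual method additionally supports insert/replace edits on partial generations, which is why the paper needs the more general parsing reduction rather than this simple concatenative special case.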