Neural table-to-text generation models have achieved remarkable progress on an array of tasks. However, due to the data-hungry nature of neural models, their performance relies heavily on large-scale training examples, limiting their applicability in real-world settings. To address this, we propose a new framework, Prototype-to-Generate (P2G), for table-to-text generation under the few-shot scenario. The proposed framework utilizes retrieved prototypes, which are jointly selected by an IR system and a novel prototype selector, to help the model bridge the structural gap between tables and texts. Experimental results on three benchmark datasets with three state-of-the-art models demonstrate that the proposed framework significantly improves model performance across various evaluation metrics.
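The two-stage retrieval described above (an IR system proposes candidate prototype sentences, then a prototype selector keeps the helpful ones) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the paper's implementation: `ir_retrieve` stands in for a real IR ranker such as BM25, and `select_prototypes` stands in for the learned prototype selector; all names and the score-threshold heuristic are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Prototype:
    text: str
    score: float

def ir_retrieve(table_terms, corpus, top_k=5):
    """Stage 1 (sketch): rank candidate sentences by term overlap with
    the table content -- a toy stand-in for a real IR system like BM25."""
    scored = [
        Prototype(sent, float(sum(1 for t in table_terms if t in sent.split())))
        for sent in corpus
    ]
    scored.sort(key=lambda p: p.score, reverse=True)
    return scored[:top_k]

def select_prototypes(candidates, threshold=1.0):
    """Stage 2 (sketch): keep only candidates deemed helpful for
    generation -- a score cutoff standing in for the learned selector."""
    return [p for p in candidates if p.score >= threshold]

# Toy usage: table content about a person, tiny sentence corpus.
table_terms = ["John", "born", "1980", "London"]
corpus = [
    "John was born in 1980 in London .",
    "The weather today is sunny .",
    "Mary was born in Paris .",
]
prototypes = select_prototypes(ir_retrieve(table_terms, corpus))
for p in prototypes:
    print(p.text)
```

The selected prototypes would then be fed, together with the linearized table, into the generation model, giving it fluent textual templates that ease the table-to-text structural mismatch.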