A new method for Text-to-SQL parsing, Grammar Pre-training (GP), is proposed to decode deep relations between questions and databases. First, to make better use of database information, a randomly chosen value is appended after each question word that is recognized as a column name, and the augmented sentence serves as the model input. Second, the initialization of the decoder vectors is improved by conditioning on the encoder output, so that question information is taken into account. Finally, a loss-adjustment technique called flooding is adopted to keep the training loss above a non-zero level, which improves generalization. By encoding sentences with GRAPPA and the RAT-SQL model, we achieve better performance on Spider, a cross-domain Text-to-SQL dataset (72.8 dev, 69.8 test). Experiments show that our method converges more easily during training and is robust.
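To make the first and third ideas concrete, two minimal sketches follow. The first illustrates the input augmentation described above: for each question token recognized as a column name, a randomly sampled value is appended right after it before the sentence is passed to the encoder. The function name, the `column_values` mapping, and the exact-match column lookup are illustrative assumptions; the abstract does not specify how column recognition or value sampling is actually performed.

```python
import random

def append_column_values(question_tokens, column_values):
    """Hypothetical sketch of the GP input augmentation.

    column_values: lower-cased column name -> list of cell values observed
    for that column in the database (names and lookup are illustrative only).
    """
    augmented = []
    for tok in question_tokens:
        augmented.append(tok)
        values = column_values.get(tok.lower())
        if values:  # token recognized as a column: append a random value after it
            augmented.append(str(random.choice(values)))
    return augmented

# Example (hypothetical schema values):
# append_column_values(["list", "name", "of", "singers"],
#                      {"name": ["Joe Sharp", "Timbaland"]})
# -> ["list", "name", "Joe Sharp", "of", "singers"]  (value chosen at random)
```

The second sketch shows the flooding objective in its standard form: the training loss is held above a small positive flood level b by computing |loss - b| + b, so that once the loss falls below b the gradient sign flips and the model ascends until the loss rises above b again. The flood level of 0.1 is an arbitrary placeholder, not a value taken from this work.

```python
import torch

def flooded_loss(raw_loss: torch.Tensor, flood_level: float = 0.1) -> torch.Tensor:
    # When raw_loss > flood_level the value and gradient are unchanged;
    # when raw_loss < flood_level the gradient is negated, so training
    # oscillates around the flood level instead of reaching zero loss.
    return (raw_loss - flood_level).abs() + flood_level

# Usage with any standard criterion:
criterion = torch.nn.CrossEntropyLoss()
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.randint(0, 10, (4,))
loss = flooded_loss(criterion(logits, targets), flood_level=0.1)
loss.backward()
```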