Strong and affordable in-domain data is a desirable asset when transferring trained semantic parsers to novel domains. As previous methods for semi-automatically constructing such data cannot handle the complexity of realistic SQL queries, we propose to construct SQL queries via context-dependent sampling, and introduce the concept of topic. Along with our SQL query construction method, we propose a novel pipeline of semi-automatic Text-to-SQL dataset construction that covers the broad space of SQL queries. We show that the created dataset is comparable with expert annotation along multiple dimensions, and is capable of improving domain transfer performance for SOTA semantic parsers.
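The paper does not spell out its sampling procedure in the abstract, but the core idea of context-dependent sampling can be illustrated: each clause of a SQL query is sampled conditioned on the choices already made, rather than independently. Below is a minimal, purely hypothetical sketch; the toy schema, value pools, and probabilities are assumptions for illustration and are not taken from the paper.

```python
import random

# Hypothetical toy schema; the paper's actual sampling grammar and
# probability model are not specified in the abstract.
SCHEMA = {
    "singer": {"name": "text", "age": "number", "country": "text"},
    "concert": {"title": "text", "year": "number"},
}

# Illustrative value pools per (table, column).
VALUES = {
    ("singer", "age"): [25, 30, 40],
    ("singer", "country"): ["'France'", "'Japan'"],
    ("concert", "year"): [2014, 2015],
}

def sample_query(rng: random.Random) -> str:
    """Sample a simple SELECT query clause by clause.

    Each decision conditions on earlier ones (the "context"):
    the projected column must belong to the sampled table, and the
    WHERE predicate reuses a column of that same table with a
    type-appropriate operator and value.
    """
    table = rng.choice(list(SCHEMA))              # first choice: the table
    select_col = rng.choice(list(SCHEMA[table]))  # depends on the table
    query = f"SELECT {select_col} FROM {table}"

    # Optionally add a WHERE clause whose column, operator, and value
    # all depend on the previously sampled table.
    candidates = [c for (t, c) in VALUES if t == table]
    if candidates and rng.random() < 0.7:
        where_col = rng.choice(candidates)
        op = ">" if SCHEMA[table][where_col] == "number" else "="
        value = rng.choice(VALUES[(table, where_col)])
        query += f" WHERE {where_col} {op} {value}"
    return query

print(sample_query(random.Random(0)))
```

The point of the conditioning is that it keeps sampled queries executable and schema-consistent by construction, which independent per-clause sampling cannot guarantee.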