We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation. Our results demonstrate that with only a small amount of data and very little code to convert into English-like representations, our blueprint for rapidly bootstrapping semantic parsers leads to surprisingly effective performance on multiple community tasks, greatly exceeding baseline methods also trained on the same limited data.
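To make the paraphrase-then-map idea concrete, below is a minimal sketch in Python. The toy calendar domain, the hand-written canonical templates, and the `lm_score` placeholder (standing in for the pretrained language model's constrained scoring of canonical paraphrases) are illustrative assumptions, not the paper's actual grammar or implementation.

```python
# Minimal sketch of the paraphrase-then-map scheme: a pretrained LM chooses a
# canonical English-like paraphrase of the input, and that paraphrase is mapped
# deterministically to the target meaning representation. Domain, templates,
# and lm_score are illustrative assumptions only.

from typing import Callable, List, Tuple

# A tiny synchronous "grammar": each canonical English-like template is paired
# with a function that deterministically builds the meaning representation.
CANONICAL_TEMPLATES: List[Tuple[str, Callable[[str], str]]] = [
    ("create an event called {x}", lambda x: f'(CreateEvent (name "{x}"))'),
    ("delete the event called {x}", lambda x: f'(DeleteEvent (name "{x}"))'),
    ("list events on {x}",          lambda x: f'(ListEvents (date "{x}"))'),
]

def enumerate_candidates(slot_values: List[str]) -> List[Tuple[str, str]]:
    """Expand every template with every candidate slot value, yielding
    (canonical paraphrase, meaning representation) pairs. The LM only ever
    chooses among these, so its output always maps to a valid program."""
    return [
        (template.format(x=value), build_mr(value))
        for template, build_mr in CANONICAL_TEMPLATES
        for value in slot_values
    ]

def lm_score(prompt: str, continuation: str) -> float:
    """Placeholder for a pretrained LM's log-probability of `continuation`
    given `prompt` (e.g. obtained via constrained decoding). Here: a crude
    word-overlap proxy so the sketch runs without any model."""
    utterance = prompt.rsplit("Input:", 1)[-1].lower()
    return float(sum(word in utterance for word in continuation.lower().split()))

def parse(utterance: str,
          few_shot_examples: List[Tuple[str, str]],
          slot_values: List[str]) -> str:
    # Few-shot prompt: natural input -> canonical paraphrase, then the new input.
    prompt = "".join(f"Input: {nl}\nParaphrase: {can}\n\n"
                     for nl, can in few_shot_examples)
    prompt += f"Input: {utterance}\nParaphrase:"
    # Pick the highest-scoring canonical paraphrase, then map it deterministically.
    canonical, meaning = max(
        enumerate_candidates(slot_values),
        key=lambda pair: lm_score(prompt, pair[0]),
    )
    return meaning

if __name__ == "__main__":
    examples = [("set up a meeting named standup", "create an event called standup")]
    print(parse("please remove the event called review", examples, ["review", "standup"]))
    # -> (DeleteEvent (name "review"))
```

In a real system the scoring step would be replaced by constrained decoding with the pretrained model, and the candidate space would be defined by a domain grammar rather than a flat template list; only the grammar and the canonical-to-program mapping need to be written by hand, which is what keeps the bootstrapping cost low.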