While neural networks are ubiquitous in state-of-the-art semantic parsers, it has been shown that most standard models suffer from dramatic performance losses when faced with compositionally out-of-distribution (OOD) data. Recently, several methods have been proposed to improve compositional generalization in semantic parsing. In this work, we instead focus on the problem of detecting compositionally OOD examples with neural semantic parsers, which, to the best of our knowledge, has not been investigated before. We investigate several strong yet simple methods for OOD detection based on predictive uncertainty. Experimental results demonstrate that these techniques perform well on the standard SCAN and CFQ datasets. Moreover, we show that OOD detection can be further improved by using a heterogeneous ensemble.
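To make the predictive-uncertainty idea concrete, the sketch below scores a decoded program by its length-normalized negative log-likelihood under the parser (higher means more uncertain, hence more likely OOD) and combines several such scores by simple averaging to mimic an ensemble. This is a minimal illustration in PyTorch only; the function names, tensor shapes, and the score-averaging rule are our assumptions, not necessarily the exact formulation used in the paper.

    import torch
    import torch.nn.functional as F

    def sequence_nll_score(token_logits: torch.Tensor, token_ids: torch.Tensor) -> torch.Tensor:
        # token_logits: (seq_len, vocab_size) decoder logits for one example.
        # token_ids:    (seq_len,) the greedily decoded token ids.
        # Returns the length-normalized negative log-likelihood of the
        # decoded sequence; higher values mean higher predictive uncertainty.
        log_probs = F.log_softmax(token_logits, dim=-1)
        token_log_probs = log_probs.gather(1, token_ids.unsqueeze(1)).squeeze(1)
        return -token_log_probs.mean()

    def ensemble_score(per_model_scores: list) -> torch.Tensor:
        # One simple combination rule (an assumption): average the
        # uncertainty scores produced by each ensemble member.
        return torch.stack(per_model_scores).mean()

    # Toy usage: random logits stand in for trained parsers' decoder outputs.
    torch.manual_seed(0)
    members = [torch.randn(12, 500) for _ in range(3)]  # 12 steps, vocab of 500
    scores = [sequence_nll_score(logits, logits.argmax(dim=-1)) for logits in members]
    print(ensemble_score(scores).item())

A threshold chosen on held-out data (or a threshold-free metric such as AUROC computed against known in- and out-of-distribution splits) then turns such a score into a detector.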