Much of the recent progress in NLU has been shown to be due to models learning dataset-specific heuristics. We conduct a case study of generalization in NLI (from MNLI to the adversarially constructed HANS dataset) across a range of BERT-based architectures (adapters, Siamese Transformers, HEX debiasing), as well as with data subsampling and increased model size. We report two successful and three unsuccessful strategies, all providing insights into how Transformer-based models learn to generalize.
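As a concrete illustration of the evaluation setup described above, the following is a minimal sketch (not the authors' code) of scoring an MNLI-trained BERT classifier on HANS: the model's 3-way predictions (entailment / neutral / contradiction) are collapsed to HANS's 2-way labels (entailment / non-entailment). The checkpoint name is a placeholder, and the MNLI label order is an assumption that varies between checkpoints, so it should be checked against `model.config.id2label`.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder: substitute any BERT-style model fine-tuned on MNLI.
MODEL_NAME = "path/to/your-mnli-finetuned-bert"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

# HANS (HuggingFace datasets): label 0 = entailment, 1 = non-entailment.
hans = load_dataset("hans", split="validation")

# Assumed index of the "entailment" class in the MNLI head;
# verify against model.config.id2label before trusting results.
ENTAILMENT_ID = 0

correct = 0
for ex in hans:
    inputs = tokenizer(ex["premise"], ex["hypothesis"],
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    # Collapse neutral and contradiction into "non-entailment".
    pred_binary = 0 if pred == ENTAILMENT_ID else 1
    correct += int(pred_binary == ex["label"])

print(f"HANS accuracy: {correct / len(hans):.3f}")
```

Reporting accuracy separately per HANS heuristic and per label (entailment vs. non-entailment cases) is the usual way to expose heuristic-driven behavior, since models relying on lexical overlap tend to score near 100% on the entailed cases and near 0% on the non-entailed ones.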