Evaluation for many natural language understanding (NLU) tasks is broken: Unreliable and biased systems score so highly on standard benchmarks that there is little room for researchers who develop better systems to demonstrate their improvements. The recent trend to abandon IID benchmarks in favor of adversarially-constructed, out-of-distribution test sets ensures that current models will perform poorly, but ultimately only obscures the abilities that we want our benchmarks to measure. In this position paper, we lay out four criteria that we argue NLU benchmarks should meet. We argue that most current benchmarks fail to meet these criteria, and that adversarial data collection does not meaningfully address the causes of these failures. Instead, restoring a healthy evaluation ecosystem will require significant progress in the design of benchmark datasets, the reliability with which they are annotated, their size, and the ways they handle social bias.