Biases and artifacts in training data can cause unwelcome behavior in text classifiers (such as shallow pattern matching), leading to a lack of generalizability. One solution to this problem is to include users in the loop and leverage their feedback to improve models. We propose a novel explanatory debugging pipeline called HILDIF, enabling humans to improve deep text classifiers using influence functions as an explanation method. We experiment on the Natural Language Inference (NLI) task, showing that HILDIF can effectively alleviate artifact problems in fine-tuned BERT models and result in increased model generalizability.
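To make the explanation method concrete: influence functions estimate how a model's loss on a test example would change if a single training example were upweighted, which is what lets a human spot training artifacts responsible for a prediction. Below is a minimal, self-contained sketch on a toy logistic regression with synthetic data (not the HILDIF pipeline or BERT itself, where the Hessian is approximated rather than inverted directly); all variable names and the setup are illustrative assumptions.

```python
import numpy as np

# Influence of a training point z on the loss at a test point z_test,
# approximated as  -grad L(z_test)^T  H^{-1}  grad L(z)
# where H is the Hessian of the (regularized) empirical risk.
# Toy logistic regression on synthetic data; purely illustrative.

rng = np.random.default_rng(0)
n, d, lam = 40, 3, 0.1                       # samples, features, L2 strength
X = rng.normal(size=(n, d))
y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit by plain gradient descent on the L2-regularized log loss.
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.5 * (X.T @ (p - y) / n + lam * w)

def grad_loss(x, label):
    # Per-example gradient of the log loss at the fitted parameters.
    return (sigmoid(x @ w) - label) * x

# Hessian of the empirical risk plus the L2 term (d x d, here small
# enough to invert exactly; large models use approximations instead).
p = sigmoid(X @ w)
H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)
H_inv = np.linalg.inv(H)

x_test, y_test = X[0], y[0]
# Influence score of each training point on the test point's loss:
# large positive values flag training examples that *hurt* this prediction.
influence = np.array([
    -grad_loss(x_test, y_test) @ H_inv @ grad_loss(X[i], y[i])
    for i in range(n)
])
print(influence[:5])
```

In a debugging loop such as the one the abstract describes, a user would inspect the highest-influence training examples for a misprediction and correct or relabel the ones exhibiting artifacts before fine-tuning again.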