We introduce SelfExplain, a novel self-explaining model that explains a text classifier's predictions using phrase-based concepts. SelfExplain augments existing neural classifiers by adding (1) a globally interpretable layer that identifies the most influential concepts in the training set for a given sample, and (2) a locally interpretable layer that quantifies the contribution of each local input concept by computing a relevance score relative to the predicted label. Experiments across five text-classification datasets show that SelfExplain facilitates interpretability without sacrificing performance. Most importantly, explanations from SelfExplain show sufficiency for model predictions and are perceived as adequate, trustworthy, and understandable by human judges compared to existing widely-used baselines.
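To make the two added layers concrete, the following is a minimal PyTorch sketch of how such an interpretable head could sit on top of an encoder. All module names, dimensions, and the exact scoring rules here (an ablation-style relevance score for the local layer and cosine-similarity concept retrieval for the global layer) are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterpretableHead(nn.Module):
    """Illustrative global + local interpretability layers on top of an
    encoder. Scoring rules are assumptions for illustration only."""

    def __init__(self, hidden_dim: int, num_labels: int,
                 concept_bank: torch.Tensor, top_k: int = 5):
        super().__init__()
        # concept_bank: (num_train_concepts, hidden_dim) phrase embeddings
        # precomputed over the training set (assumed given).
        self.register_buffer("concept_bank", concept_bank)
        self.classifier = nn.Linear(hidden_dim, num_labels)
        self.top_k = top_k

    def forward(self, sent_repr: torch.Tensor, phrase_reprs: torch.Tensor):
        # sent_repr: (batch, hidden) pooled sentence representation.
        # phrase_reprs: (batch, num_phrases, hidden), one vector per
        # candidate phrase concept in the input.
        logits = self.classifier(sent_repr)              # (batch, labels)
        pred = logits.argmax(dim=-1)                     # (batch,)

        # Local interpretability: score each input phrase by how much the
        # predicted label's probability drops when the phrase's
        # contribution is removed (a simple ablation-style relevance).
        probs = F.softmax(logits, dim=-1)
        without = sent_repr.unsqueeze(1) - phrase_reprs  # (b, p, h)
        probs_wo = F.softmax(self.classifier(without), dim=-1)
        p_pred = probs.gather(1, pred.unsqueeze(1))      # (b, 1)
        p_pred_wo = probs_wo.gather(
            2, pred.view(-1, 1, 1).expand(-1, phrase_reprs.size(1), 1)
        ).squeeze(-1)                                    # (b, p)
        local_relevance = p_pred - p_pred_wo             # (b, p)

        # Global interpretability: retrieve the training-set concepts most
        # similar to this sample, as a proxy for "most influential".
        sims = F.cosine_similarity(
            sent_repr.unsqueeze(1), self.concept_bank.unsqueeze(0), dim=-1
        )                                                # (b, num_concepts)
        global_topk = sims.topk(self.top_k, dim=-1).indices  # (b, k)

        return logits, local_relevance, global_topk
```

In this sketch, the phrase representations would come from spans of the encoder output (e.g., non-terminal phrases from a parse of the input), and the concept bank would hold phrase embeddings precomputed over the training set; both are assumed inputs rather than components defined by the paper.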