Transformer-based neural networks offer very good classification performance across a wide range of domains, but do not provide explanations of their predictions. While several explanation methods, including SHAP, address the problem of interpreting deep learning models, they are not adapted to operate on state-of-the-art transformer-based neural networks such as BERT. Another shortcoming of these methods is that their visualization of explanations as lists of the most relevant words does not take into account the sequential and structurally dependent nature of text. This paper proposes the TransSHAP method, which adapts SHAP to transformer models, including BERT-based text classifiers. It advances SHAP visualizations by presenting explanations in a sequential manner, which human evaluators assessed as competitive with state-of-the-art solutions.
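To make the core adaptation concrete, here is a minimal sketch (not the authors' released code) of how a BERT classifier can be wrapped so that Kernel SHAP attributes a prediction to individual words: perturbed inputs are represented as binary word-presence masks, reconstructed into text, and fed through the tokenizer and model. The model name, the whitespace word split, and the use of `shap.KernelExplainer` are assumptions for illustration.

```python
# Sketch: word-level SHAP attributions for a BERT text classifier.
import numpy as np
import shap
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "textattack/bert-base-uncased-SST-2"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

sentence = "The film was surprisingly funny and heartfelt."
words = sentence.split()  # treat words as the features to be explained

def predict_from_masks(masks: np.ndarray) -> np.ndarray:
    """Map binary word-presence masks to the model's class probabilities."""
    texts = [
        " ".join(w for w, keep in zip(words, m) if keep)
        for m in masks.astype(bool)
    ]
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

# Reference input: the all-zeros mask, i.e. the empty sentence.
background = np.zeros((1, len(words)))
explainer = shap.KernelExplainer(predict_from_masks, background)
sv = explainer.shap_values(np.ones((1, len(words))), nsamples=200)

# Older SHAP versions return a list of per-class arrays; newer ones a
# 3-D array. Either way, take the contributions for class index 1.
contrib = sv[1][0] if isinstance(sv, list) else sv[0, :, 1]

# Print per-word contributions in sentence order -- the raw material
# for a sequential visualization rather than a ranked word list.
for word, value in zip(words, contrib):
    print(f"{word:>15s}  {value:+.3f}")
```

Keeping the attributions in sentence order, rather than sorting them by magnitude, is what enables the sequential visualization the paper argues for.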