Large pretrained language models using the transformer neural network architecture are becoming a dominant methodology for many natural language processing tasks, such as question answering, text classification, word sense disambiguation, text completion and machine translation. Commonly comprising hundreds of millions of parameters, these models offer state-of-the-art performance, but at the expense of interpretability. The attention mechanism is the main component of transformer networks. We present AttViz, a method for exploration of self-attention in transformer networks, which can help in explanation and debugging of the trained models by showing associations between text tokens in an input sequence. We show that existing deep learning pipelines can be explored with AttViz, which offers novel visualizations of the attention heads and their aggregations. We implemented the proposed methods in an online toolkit and an offline library. Using examples from news analysis, we demonstrate how AttViz can be used to inspect and potentially better understand what a model has learned.
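As a rough illustration of the kind of data a tool like AttViz visualizes (not AttViz's own API), the sketch below extracts per-layer, per-head self-attention matrices from a pretrained transformer. It assumes the Hugging Face `transformers` library and the `bert-base-uncased` checkpoint; the example sentence and the mean-over-heads aggregation are illustrative only.

```python
# Minimal sketch: obtain token-to-token self-attention weights from a
# pretrained transformer (assumes the Hugging Face `transformers` library;
# model name and sentence are illustrative, not tied to AttViz).
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # any checkpoint that exposes attentions
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)
model.eval()

sentence = "The markets reacted sharply to the news."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len): self-attention weights per head.
attentions = outputs.attentions
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# One simple aggregation a visualization tool might offer:
# average the heads of the last layer into a single matrix.
last_layer = attentions[-1][0]            # (num_heads, seq_len, seq_len)
mean_attention = last_layer.mean(dim=0)   # (seq_len, seq_len)

for i, tok in enumerate(tokens):
    top = mean_attention[i].argmax().item()
    print(f"{tok:>12} attends most to {tokens[top]}")
```

Matrices of this form, one per head and per layer, are the associations between input tokens that attention-visualization tools render and aggregate.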