Deep neural networks have consistently pushed the state-of-the-art performance in natural language processing and are considered the de facto modeling approach for solving complex NLP tasks such as machine translation, summarization, and question answering. Despite the proven efficacy of deep neural networks at large, their opaqueness is a major cause for concern. In this tutorial, we will present research work on interpreting fine-grained components of a neural network model from two perspectives: i) fine-grained interpretation, and ii) causation analysis. The former is a class of methods that analyze neurons with respect to a desired language concept or task. The latter studies the role of neurons and input features in explaining the decisions made by the model. We will also discuss how interpretation methods and causation analysis can be connected towards better interpretability of model predictions. Finally, we will walk you through various toolkits that facilitate fine-grained interpretation and causation analysis of neural models.
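To make the first perspective concrete, here is a minimal sketch (not any specific toolkit covered in the tutorial) of one common fine-grained interpretation recipe: train a linear probe on hidden-layer activations for a binary language property, then rank individual neurons by the magnitude of their learned weights. The toy data, the `train_linear_probe` helper, and the injected signal in neuron 2 are all illustrative assumptions, not part of the tutorial itself.

```python
import numpy as np

def train_linear_probe(acts, labels, lr=0.1, epochs=500):
    """Logistic-regression probe trained with plain gradient descent.

    acts:   (n_examples, n_neurons) activation matrix
    labels: (n_examples,) binary labels for the probed concept
    """
    n, d = acts.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        z = acts @ w + b
        p = 1.0 / (1.0 + np.exp(-z))        # sigmoid predictions
        grad_w = acts.T @ (p - labels) / n  # gradient of the log loss
        grad_b = np.mean(p - labels)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def rank_neurons(w):
    """Neurons sorted by |weight|: a simple saliency ranking
    for the concept the probe was trained on."""
    return np.argsort(-np.abs(w))

# Toy setup: neuron 2 carries the "concept"; the rest are noise.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200).astype(float)
acts = rng.normal(size=(200, 5))
acts[:, 2] += 3.0 * labels  # inject the concept signal into neuron 2

w, b = train_linear_probe(acts, labels)
top_neuron = rank_neurons(w)[0]  # neuron 2 should rank first
```

The ranking step is where probing becomes *fine-grained*: instead of reporting only the probe's overall accuracy, we attribute the concept to specific neurons, which can then be studied (or ablated) individually.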