Abstract

Interpretable rationales for model predictions are crucial in practical applications. We develop neural models that possess an interpretable inference process for dependency parsing. Our models adopt instance-based inference, in which dependency edges are extracted and labeled by comparing them to edges in a training set. Because the training edges are used explicitly for prediction, it is easy to grasp the contribution of each edge to a prediction. Our experiments show that our instance-based models achieve accuracy competitive with standard neural models, and that their instance-based explanations are reasonably plausible.
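The core idea of instance-based inference described above can be illustrated with a minimal sketch: a candidate dependency edge is labeled by comparing its vector representation to labeled training edges and voting by similarity. This is only an illustrative nearest-neighbor version under assumed edge representations, not the paper's exact model; the function name `label_edge` and the toy vectors are hypothetical.

```python
# Illustrative sketch of instance-based edge labeling (NOT the paper's exact model).
# A candidate edge's label is decided by its k most similar training edges,
# so each training edge's contribution to the prediction is directly visible.
import numpy as np

def label_edge(edge_vec, train_vecs, train_labels, k=3):
    """Assign a dependency label via k-nearest training edges (cosine similarity)."""
    # Normalize vectors so the dot product equals cosine similarity.
    e = edge_vec / np.linalg.norm(edge_vec)
    T = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    sims = T @ e
    top = np.argsort(-sims)[:k]  # indices of the k most similar training edges
    # Sum similarity per label; these per-edge scores are the interpretable
    # "contributions" that instance-based inference exposes.
    scores = {}
    for i in top:
        scores[train_labels[i]] = scores.get(train_labels[i], 0.0) + float(sims[i])
    best = max(scores, key=scores.get)
    contributions = {train_labels[i]: float(sims[i]) for i in top}
    return best, contributions

# Toy usage with three hypothetical training edges and two labels.
train_vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
train_labels = ["nsubj", "nsubj", "obj"]
label, contributions = label_edge(np.array([1.0, 0.05]), train_vecs, train_labels, k=2)
```

Here the candidate edge is closest to the two "nsubj" training edges, so the model predicts "nsubj", and the `contributions` dictionary shows exactly which training edges drove the decision.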