We present an error analysis of neural UPOS taggers to evaluate why using gold tags makes such a large positive contribution to parsing performance while using predicted UPOS tags either harms performance or offers only a negligible improvement. We also evaluate what neural dependency parsers implicitly learn about word types and how this relates to the errors taggers make, in order to explain the minimal impact that predicted tags have on parsers. We then mask UPOS tags based on the errors taggers make, to tease apart the contribution of tags that taggers classify correctly from those they misclassify, and to measure the impact of tagging errors.
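As a rough illustration of the error-based masking setup described above, the sketch below replaces UPOS tags at selected positions with a placeholder before they are fed to a parser. This is a minimal, assumed reconstruction, not the paper's released code: the function name, the "<MASK>" placeholder, and the toy data are illustrative only.

```python
# Minimal sketch of masking UPOS tags based on tagger errors.
# Assumed input format: parallel lists of gold and predicted tags per sentence.
MASK = "<MASK>"  # hypothetical placeholder for a hidden tag

def mask_upos(gold_tags, pred_tags, mask_errors=True):
    """Return a tag sequence with tags masked at positions the tagger
    got wrong (mask_errors=True) or got right (mask_errors=False)."""
    masked = []
    for gold, pred in zip(gold_tags, pred_tags):
        is_error = gold != pred
        if is_error == mask_errors:
            masked.append(MASK)   # hide this tag from the parser
        else:
            masked.append(gold)   # keep the gold tag
    return masked

if __name__ == "__main__":
    gold = ["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]
    pred = ["DET", "VERB", "VERB", "ADP", "DET", "NOUN"]
    # Mask misclassified positions: isolates the contribution of tags
    # the tagger gets right.
    print(mask_upos(gold, pred, mask_errors=True))
    # Mask correctly classified positions: isolates the contribution of
    # tags the tagger gets wrong.
    print(mask_upos(gold, pred, mask_errors=False))
```

Comparing parser performance under the two masking conditions (and against unmasked gold or predicted tags) is one way to attribute gains or losses to the tags taggers succeed or fail on.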