Large-scale language models such as ELMo and BERT have pushed the horizon of what is possible in semantic role labeling (SRL), solving the out-of-vocabulary problem and enabling end-to-end systems, but they have also introduced significant biases. We evaluate three SRL parsers on very simple transitive sentences with verbs usually associated with animate subjects and objects, such as "Mary babysat Tom": a state-of-the-art parser based on BERT, an older parser based on GloVe, and an even older parser from before the days of word embeddings. When arguments are word forms predominantly used as person names, aligning with common sense expectations of animacy, the BERT-based parser is unsurprisingly superior; yet, with abstract or random nouns, the opposite picture emerges. We refer to this as "common sense bias" and present a challenge dataset for evaluating the extent to which parsers are sensitive to such a bias. Our code and challenge dataset are available here: github.com/coastalcph/comte
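The construction described above can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's actual generation script: we pair a few verbs that expect animate arguments with either person names (matching common-sense animacy expectations) or abstract nouns (violating them), yielding control and challenge sentences of the form "ARG0 verb ARG1".

```python
# Hypothetical sketch of the challenge-sentence construction; the word
# lists and helper names here are illustrative, not from the paper.
from itertools import product

VERBS = ["babysat", "hugged", "interviewed"]   # verbs expecting animate args
PERSON_NAMES = ["Mary", "Tom"]                 # arguments matching animacy expectations
ABSTRACT_NOUNS = ["Honesty", "Justice"]        # arguments violating them

def make_sentences(arguments):
    """Generate simple transitive 'ARG0 verb ARG1.' sentences
    for every verb and every ordered pair of distinct arguments."""
    return [
        f"{a0} {verb} {a1}."
        for verb, (a0, a1) in product(VERBS, product(arguments, repeat=2))
        if a0 != a1
    ]

control = make_sentences(PERSON_NAMES)      # e.g. "Mary babysat Tom."
challenge = make_sentences(ABSTRACT_NOUNS)  # e.g. "Honesty babysat Justice."
```

An SRL parser that relies on common-sense animacy cues would be expected to label the control set more reliably than the challenge set, even though the two sets share identical syntactic structure.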