We point out that common evaluation practices for cross-document coreference resolution have been unrealistically permissive in their assumed settings, yielding inflated results. We propose addressing this issue via two evaluation methodology principles. First, as in other tasks, models should be evaluated on predicted mentions rather than on gold mentions. Doing so raises a subtle issue regarding singleton coreference clusters, which we address by decoupling the evaluation of mention detection from that of coreference linking. Second, we argue that models should not exploit the synthetic topic structure of the standard ECB+ dataset, forcing models to confront the lexical ambiguity challenge, as intended by the dataset creators. We demonstrate empirically the drastic impact of our more realistic evaluation principles on a competitive model, yielding a score 33 F1 points lower than that obtained under prior lenient practices.
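To illustrate the decoupling principle described above, the following is a minimal sketch (not the paper's actual scorer) in which mention detection is scored separately from coreference linking. All function names are illustrative; a pairwise-link F1 stands in for standard coreference metrics such as MUC, B³, or CEAF. Because singletons contribute no coreference links, they affect only the detection score, not the linking score.

```python
from itertools import combinations

def mention_detection_f1(gold_mentions, pred_mentions):
    """Exact-span mention detection F1; singletons are counted here."""
    gold, pred = set(gold_mentions), set(pred_mentions)
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def pairwise_link_f1(gold_clusters, pred_clusters):
    """Pairwise coreference-link F1 over predicted mentions.
    Singleton clusters yield no links, so they do not influence
    this linking score -- detection and linking are decoupled."""
    def links(clusters):
        out = set()
        for cluster in clusters:
            # All unordered mention pairs within a cluster.
            out.update(frozenset(pair) for pair in combinations(cluster, 2))
        return out
    g, p = links(gold_clusters), links(pred_clusters)
    if not g or not p:
        return 0.0
    prec, rec = len(g & p) / len(p), len(g & p) / len(g)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

For example, a system that predicts the correct two-mention cluster but misses a gold singleton is penalized on mention detection while still receiving full credit for linking, which is exactly the behavior the decoupled evaluation is meant to capture.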