Academic neural models for coreference resolution (coref) are typically trained on a single dataset, OntoNotes, and model improvements are benchmarked on that same dataset. However, real-world applications of coref depend on the annotation guidelines and the domain of the target dataset, which often differ from those of OntoNotes. We aim to quantify the transferability of coref models based on the number of annotated documents available in the target dataset. We examine eleven target datasets and find that continued training is consistently effective and especially beneficial when there are few target documents. We establish new benchmarks across several datasets, including state-of-the-art results on PreCo.
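As a toy illustration of the continued-training setup described above (not the paper's actual model or interface), the sketch below fine-tunes a source-trained mention-pair scorer on a handful of annotated target pairs. A plain logistic-regression scorer stands in for a neural coref model; the feature vectors and labels are hypothetical.

```python
import numpy as np

def continued_training(w, target_pairs, lr=0.1, epochs=20):
    """Continue training source-trained weights w on a few annotated
    target mention pairs (x, y), where x is a feature vector and
    y in {0, 1} marks whether the pair is coreferent.

    A logistic-regression scorer is used here as a stand-in for a
    neural coref model; the real setting would instead resume
    gradient descent on the full model's parameters.
    """
    w = w.copy()
    for _ in range(epochs):
        for x, y in target_pairs:
            p = 1.0 / (1.0 + np.exp(-w @ x))  # predicted coref probability
            w -= lr * (p - y) * x             # gradient step on log loss
    return w
```

The point of the toy is the procedure, not the model: starting from weights learned on a source dataset and taking a few additional gradient steps on target-annotated examples adapts the scorer to the target guidelines, which mirrors why continued training helps most when target documents are scarce.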