The Shared Task on Evaluating Accuracy focused on techniques (both manual and automatic) for evaluating the factual accuracy of texts produced by neural NLG systems, in a sports-reporting domain. Four teams submitted evaluation techniques for this task, taking very different approaches. The best-performing submissions did encouragingly well at this difficult task. However, all automatic submissions struggled to detect factual errors which are semantically or pragmatically complex (for example, based on incorrect computation or inference).
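To illustrate the distinction, the sketch below shows a toy number-matching check against box-score data. It is not any participant's actual method; the player name, stat fields, and example sentences are hypothetical. It shows why a surface-level check can catch a simple incorrect number but cannot catch a claim whose falsity only emerges through computation or inference over the data.

```python
# Illustrative sketch only (hypothetical data and sentences), assuming the
# accuracy check has access to the box-score facts the text was generated from.
import re

# Hypothetical box-score facts for one player.
box_score = {"player": "Jordan Poole", "points": 22, "rebounds": 4, "assists": 7}

def check_numbers(sentence: str, facts: dict) -> list:
    """Flag numbers in the sentence that do not match any stat in `facts`."""
    known_values = {v for v in facts.values() if isinstance(v, int)}
    return [tok for tok in re.findall(r"\d+", sentence) if int(tok) not in known_values]

# A simple numeric error (28 vs. the actual 22 points) is flagged:
print(check_numbers("Jordan Poole scored 28 points.", box_score))  # ['28']

# An inference error is missed: the "double-double" claim is false
# (only one stat reaches 10), yet every literal number matches the data.
print(check_numbers(
    "Jordan Poole recorded a double-double with 22 points and 7 assists.",
    box_score))  # []
```

Catching the second kind of error requires reasoning over the data (here, counting how many stat categories reach ten), which is exactly where the automatic submissions struggled.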