The NLP field has recently seen a substantial increase in work related to reproducibility of results, and more generally in recognition of the importance of having shared definitions and practices relating to evaluation. Much of the work on reproducibility has so far focused on metric scores, with reproducibility of human evaluation results receiving far less attention. As part of a research programme designed to develop theory and practice of reproducibility assessment in NLP, we organised the first shared task on reproducibility of human evaluations, ReproGen 2021. This paper describes the shared task in detail, summarises results from each of the reproduction studies submitted, and provides further comparative analysis of the results. Out of nine initial team registrations, we received submissions from four teams. Meta-analysis of the four reproduction studies revealed varying degrees of reproducibility, and allowed very tentative first conclusions about what types of evaluation tend to have better reproducibility.