We outline the Great Misalignment Problem in natural language processing research: put simply, the problem definition is not in line with the proposed method, and the human evaluation is in line with neither the definition nor the method. We study this misalignment by surveying 10 randomly sampled papers published at ACL 2020 that report results with human evaluation. Our results show that only one paper was fully aligned in terms of problem definition, method, and evaluation. Only two papers presented a human evaluation that was in line with what was modeled in the method. These findings highlight that the Great Misalignment Problem is a major one and that it affects the validity and reproducibility of results obtained by human evaluation.
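As a minimal sketch of the sampling step, assuming papers are drawn uniformly without replacement from ACL Anthology identifiers for the ACL 2020 main conference (the identifier range, the seed, and the pre-filtering to papers with human evaluation are illustrative assumptions, not details from the paper):

```python
import random

# Illustrative ACL Anthology identifiers for the ACL 2020 main conference
# ("2020.acl-main.1" ... "2020.acl-main.779"); the exact range is an assumption.
# In the actual study, sampling is restricted to papers that report a human
# evaluation, so this pool would first be filtered accordingly.
acl_2020_papers = [f"2020.acl-main.{i}" for i in range(1, 780)]

random.seed(42)  # fixed seed for reproducibility; not specified by the paper
sample = random.sample(acl_2020_papers, k=10)  # uniform draw without replacement
print(sample)
```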