A key part of the NLP ethics movement is responsible use of data, but exactly what that means, or how it can best be achieved, remains unclear. This position paper discusses the core legal and ethical principles for the collection and sharing of textual data, and the tensions between them. We propose a potential checklist for responsible data (re-)use that could both standardise the peer review of conference submissions and enable a more in-depth view of published research across the community. Our proposal aims to contribute to the development of a consistent standard for data (re-)use, embraced across NLP conferences.