Much of the world's population experiences some form of disability during their lifetime. Caution must be exercised while designing natural language processing (NLP) systems to prevent systems from inadvertently perpetuating ableist bias against people with disabilities, i.e., prejudice that favors those with typical abilities. We report on various analyses based on word predictions of a large-scale BERT language model. Statistically significant results demonstrate that people with disabilities can be disadvantaged. Findings also explore overlapping forms of discrimination related to interconnected gender and race identities.
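The abstract describes analyses based on the masked-word predictions of a BERT language model. A minimal sketch of that style of probing is shown below: fill a template sentence with different ability-related descriptors and compare the model's top predictions for the masked slot. The templates, descriptor phrases, and choice of `bert-base-uncased` are illustrative assumptions, not the authors' exact protocol, and this uses the Hugging Face `transformers` fill-mask pipeline rather than whatever tooling the paper used.

```python
def build_prompts(templates, phrases):
    """Fill each template's {} slot with each descriptor phrase,
    leaving the [MASK] token for the model to predict."""
    return [t.format(p) for t in templates for p in phrases]

def top_predictions(prompts, k=5, model="bert-base-uncased"):
    """Return the top-k [MASK] fillers BERT predicts for each prompt.
    Requires the `transformers` package and downloads the model weights,
    so the import is kept local to this function."""
    from transformers import pipeline
    fill = pipeline("fill-mask", model=model, top_k=k)
    return {p: [r["token_str"] for r in fill(p)] for p in prompts}

if __name__ == "__main__":
    # Hypothetical probe: contrast disability-related descriptors with
    # typical-ability ones and inspect the predicted completions.
    templates = ["A person who is {} is [MASK]."]
    phrases = ["deaf", "blind", "hearing", "sighted"]
    for prompt, words in top_predictions(build_prompts(templates, phrases)).items():
        print(prompt, "->", words)
```

Comparing the sentiment or valence of the predicted fillers across descriptor groups is one way such probes surface the kind of ableist bias the abstract reports.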