Word embeddings are widely used in Natural Language Processing (NLP) for a vast range of applications. However, it has been consistently shown that these embeddings reflect the same human biases present in the data used to train them. Most bias indicators proposed to reveal bias in word embeddings are average-based indicators built on the cosine similarity measure. In this study, we examine the impact of different similarity measures, as well as descriptive statistics other than the average, on measuring the biases of contextual and non-contextual word embeddings. We show that the magnitude of the biases revealed in word embeddings depends on the descriptive statistics and similarity measures used to measure them. We found that, across the ten categories of word embedding association tests, the Mahalanobis distance reveals the smallest bias and the Euclidean distance reveals the largest bias in word embeddings. In addition, the contextual models reveal less severe biases than the non-contextual word embedding models.
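To illustrate how the choice of similarity measure and descriptive statistic can change a measured bias, the following is a minimal sketch (not the authors' exact implementation) of a WEAT-style association test in which both the similarity function and the aggregation statistic are pluggable. The function names (cosine_sim, euclidean_sim, make_mahalanobis_sim, weat_effect_size), the use of NumPy, and the random vectors standing in for real embeddings are assumptions made for this example; in practice the inverse covariance matrix for the Mahalanobis variant would be estimated from the embedding vocabulary.

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def euclidean_sim(u, v):
    """Negated Euclidean distance, so larger values mean 'more similar'."""
    return -float(np.linalg.norm(u - v))

def make_mahalanobis_sim(vocab_matrix):
    """Build a negated Mahalanobis-distance similarity from an (n_words, dim) matrix."""
    inv_cov = np.linalg.pinv(np.cov(vocab_matrix, rowvar=False))
    def sim(u, v):
        d = u - v
        return -float(np.sqrt(d @ inv_cov @ d))
    return sim

def association(w, A, B, sim, stat=np.mean):
    """s(w, A, B) = stat_{a in A} sim(w, a) - stat_{b in B} sim(w, b)."""
    return stat([sim(w, a) for a in A]) - stat([sim(w, b) for b in B])

def weat_effect_size(X, Y, A, B, sim, stat=np.mean):
    """WEAT-style effect size: difference of mean associations over the pooled std."""
    s_X = [association(x, A, B, sim, stat) for x in X]
    s_Y = [association(y, A, B, sim, stat) for y in Y]
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
vocab = rng.normal(size=(1000, 50))              # hypothetical embedding table
X, Y = list(vocab[:8]), list(vocab[8:16])        # target word sets
A, B = list(vocab[16:24]), list(vocab[24:32])    # attribute word sets

for name, sim in [("cosine", cosine_sim),
                  ("euclidean", euclidean_sim),
                  ("mahalanobis", make_mahalanobis_sim(vocab))]:
    print(name, round(weat_effect_size(X, Y, A, B, sim), 3))

# A descriptive statistic other than the average, e.g. the median:
print("cosine/median", round(weat_effect_size(X, Y, A, B, cosine_sim, stat=np.median), 3))
```

Because sim and stat are interchangeable arguments, the same target and attribute word sets can be re-scored under cosine, Euclidean, and Mahalanobis measures (and under different aggregation statistics), which is the kind of comparison the abstract describes.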