We use a dataset of U.S. first names with labels based on predominant gender and racial group to examine the effect of training corpus frequency on tokenization, contextualization, similarity to initial representation, and bias in BERT, GPT-2, T5, and XLNet. We show that predominantly female and non-white names are less frequent in the training corpora of these four language models. We find that infrequent names are more self-similar across contexts, with Spearman's rho between frequency and self-similarity as low as -.763. Infrequent names are also less similar to initial representation, with Spearman's rho between frequency and linear centered kernel alignment (CKA) similarity to initial representation as high as .702. Moreover, we find Spearman's rho between racial bias and name frequency in BERT of .492, indicating that lower-frequency minority group names are more associated with unpleasantness. Representations of infrequent names undergo more processing, but are more self-similar, indicating that models rely on less context-informed representations of uncommon and minority names which are overfit to a lower number of observed contexts.
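The two similarity measures named in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes self-similarity is the mean pairwise cosine similarity among a name's contextual embeddings (a common formulation), and implements linear CKA in its standard Frobenius-norm form. The function names `linear_cka` and `self_similarity` are illustrative.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices X, Y of shape (n_samples, dim), e.g. a name's contextual
    embeddings vs. its initial (layer-0) embeddings."""
    X = X - X.mean(axis=0)          # column-center each matrix
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def self_similarity(E):
    """Mean pairwise cosine similarity among the rows of E
    (one contextual embedding per observed context)."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize rows
    sims = E @ E.T                   # all pairwise cosines
    n = len(E)
    return (sims.sum() - n) / (n * (n - 1))  # average, excluding diagonal
```

With per-name scores computed this way, Spearman's rho against corpus frequency (e.g. via `scipy.stats.spearmanr`) yields correlations like those reported above.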