Using data from English cloze tests, in which subjects also self-reported their gender, age, education, and race, we examine performance differences of pretrained language models across demographic groups, defined by these (protected) attributes. We demonstrate wide performance gaps across demographic groups and show that pretrained language models systematically disfavor young non-white male speakers; i.e., not only do pretrained language models learn social biases (stereotypical associations) -- pretrained language models also learn sociolectal biases, learning to speak more like some than like others. We show, however, that, with the exception of BERT models, larger pretrained language models reduce some of the performance gaps between majority and minority groups.
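The evaluation setup described above can be illustrated with a short sketch: score each cloze item with a pretrained masked language model and aggregate accuracy per demographic group. The sketch below is a minimal illustration, assuming a HuggingFace masked LM and a hypothetical `cloze_items` list (a sentence with a single blank, the gold answer, and the speaker's self-reported attributes); these names and the data format are illustrative, not taken from the paper's released code, and single-token answers are assumed for simplicity.

```python
# Sketch: per-group cloze accuracy with a pretrained masked LM.
# Assumptions (not from the paper's code): `cloze_items` format, "___" as the
# blank marker, and single-token gold answers.
from collections import defaultdict

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # any pretrained masked LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical cloze items: text with one blank, the gold word, and the
# speaker's (protected) attributes as a group key.
cloze_items = [
    {"text": "I drove my ___ to work.", "answer": "car",
     "group": ("female", "18-25", "white")},
    {"text": "She poured a cup of ___ .", "answer": "coffee",
     "group": ("male", "18-25", "non-white")},
]

hits = defaultdict(int)
totals = defaultdict(int)

with torch.no_grad():
    for item in cloze_items:
        # Replace the blank with the model's mask token and predict it.
        masked = item["text"].replace("___", tokenizer.mask_token)
        inputs = tokenizer(masked, return_tensors="pt")
        logits = model(**inputs).logits
        mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
        pred_id = logits[0, mask_pos].argmax(-1).item()
        pred = tokenizer.decode([pred_id]).strip()

        totals[item["group"]] += 1
        hits[item["group"]] += int(pred.lower() == item["answer"].lower())

# Per-group cloze accuracy; the gaps between groups are the quantity of interest.
for group, n in totals.items():
    print(group, hits[group] / n)
```

Accuracy is only one possible scoring choice; the same loop could instead aggregate the gold token's log-probability per group, which avoids the single-token restriction on answers.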