In this paper, we investigate what types of stereotypical information are captured by pretrained language models. We present the first dataset comprising stereotypical attributes of a range of social groups and propose a method to elicit stereotypes encoded by pretrained language models in an unsupervised fashion. Moreover, we link the emergent stereotypes to their manifestation as basic emotions as a means to study their emotional effects in a more generalized manner. To demonstrate how our methods can be used to analyze emotion and stereotype shifts due to linguistic experience, we use fine-tuning on news sources as a case study. Our experiments expose how attitudes towards different social groups vary across models and how quickly emotions and stereotypes can shift at the fine-tuning stage.
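As a concrete illustration of what unsupervised elicitation of this kind can look like, the minimal sketch below probes a masked language model with a cloze prompt and reads off its highest-ranked attribute words for a social group. The prompt template, group name, and model checkpoints are illustrative assumptions, not the prompts or data released with the paper; running the same probe on a base checkpoint and on a (hypothetical) news-fine-tuned checkpoint mirrors the fine-tuning case study, since shifts in the elicited attributes indicate shifts in the encoded stereotype.

```python
# Hypothetical sketch of cloze-style stereotype elicitation from a masked LM.
# The template, group, and model names are illustrative assumptions only.
from transformers import pipeline

def top_attributes(model_name: str, group: str, k: int = 5):
    """Return the k attribute words a masked LM ranks highest for a social group."""
    fill_mask = pipeline("fill-mask", model=model_name)
    prompt = f"Why are {group} so [MASK]?"  # assumed cloze template
    # Each prediction carries the predicted token string and its probability.
    return [(p["token_str"].strip(), round(p["score"], 4))
            for p in fill_mask(prompt, top_k=k)]

if __name__ == "__main__":
    group = "teenagers"
    print("base model:", top_attributes("bert-base-uncased", group))
    # A news-fine-tuned checkpoint (hypothetical path) could be probed the same
    # way to observe how the elicited attributes shift after fine-tuning:
    # print("news-tuned:", top_attributes("path/to/news-finetuned-bert", group))
```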