Representation learning is widely used in NLP for a vast range of tasks. However, representations derived from text corpora often reflect social biases. This phenomenon is pervasive and consistent across different neural models, causing serious concern. Previous methods mostly rely on a pre-specified, user-provided bias direction or suffer from unstable training. In this paper, we propose an adversarial disentangled debiasing model that dynamically decouples social bias attributes from the intermediate representations trained on the main task. We aim to denoise bias information while training on the downstream task, rather than completely removing social bias in pursuit of static unbiased representations. Experiments demonstrate the effectiveness of our method in terms of both debiasing and main-task performance.
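The core mechanism the abstract describes is adversarial decoupling of bias attributes from task representations during downstream training. Below is a minimal PyTorch sketch of the generic adversarial-debiasing setup this line of work builds on, using a gradient-reversal layer; the encoder, head shapes, and the lambd weighting are illustrative assumptions, not the paper's exact disentanglement architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialDebiasModel(nn.Module):
    """Hypothetical joint model: main-task head plus an adversarial bias classifier
    attached to the shared intermediate representation."""
    def __init__(self, encoder, hidden_dim, num_labels, num_bias_attrs, lambd=1.0):
        super().__init__()
        self.encoder = encoder                      # any text encoder mapping inputs to hidden_dim vectors
        self.task_head = nn.Linear(hidden_dim, num_labels)
        self.bias_head = nn.Linear(hidden_dim, num_bias_attrs)  # adversary
        self.lambd = lambd

    def forward(self, inputs):
        h = self.encoder(inputs)                    # intermediate representation
        task_logits = self.task_head(h)
        # The adversary sees h through the gradient-reversal layer, so minimizing
        # its loss pushes the encoder to strip bias-attribute information from h
        # while the task head continues to learn from the same representation.
        bias_logits = self.bias_head(GradReverse.apply(h, self.lambd))
        return task_logits, bias_logits

# Illustrative training step: both losses are summed and backpropagated in one pass.
# criterion = nn.CrossEntropyLoss()
# task_logits, bias_logits = model(batch_inputs)
# loss = criterion(task_logits, task_labels) + criterion(bias_logits, bias_labels)
# loss.backward()
```

Because the reversal flips only the gradients flowing into the encoder, the bias classifier is trained as well as possible while the encoder is trained to defeat it during the main-task updates, which matches the abstract's goal of denoising bias information during downstream training rather than projecting it out along a fixed, pre-specified direction.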