This work revisits the task of detecting decision-related utterances in multi-party dialogue. We explore the performance of a traditional approach and of a deep learning approach based on transformer language models, with the latter providing modest improvements. We then analyze topic bias in the models using topic information obtained by manual annotation. Our finding is that when detecting some types of decisions in our data, models rely more on topic-specific words that the decisions are about than on words that more generally indicate decision making. We explore this further by removing topic information from the training data. We show that this resolves the bias issues to an extent and, surprisingly, sometimes even boosts performance.
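To make the topic-removal step more concrete, below is a minimal sketch of how annotated topic-specific words could be masked out of training utterances before fine-tuning a classifier. The function name, the `[TOPIC]` placeholder, and the example data are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: removing topic information from training utterances.
# Assumption (not from the paper): topic-specific words are available per
# utterance from manual annotation and are replaced with a neutral
# placeholder token before the classifier is trained.

import re
from typing import Iterable


def mask_topic_words(utterance: str, topic_words: Iterable[str],
                     placeholder: str = "[TOPIC]") -> str:
    """Replace annotated topic-specific words with a neutral placeholder."""
    masked = utterance
    for word in topic_words:
        # Whole-word, case-insensitive replacement.
        masked = re.sub(rf"\b{re.escape(word)}\b", placeholder, masked,
                        flags=re.IGNORECASE)
    return masked


if __name__ == "__main__":
    # Hypothetical decision-related utterance whose decision is about a
    # specific topic; the topic words themselves are invented for illustration.
    utterance = "So we decided the remote control will use a plastic case."
    topic_words = {"remote control", "plastic", "case"}
    print(mask_topic_words(utterance, topic_words))
    # -> "So we decided the [TOPIC] will use a [TOPIC] [TOPIC]."
```

A model trained on such masked utterances can no longer key on the topic vocabulary and is pushed toward the more general decision-indicating cues, which is the effect the abstract describes.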