Fake news causes significant damage to society. To deal with such fake news, numerous studies have been conducted on building detection models and constructing datasets. However, most fake news datasets depend on a specific time period, so detection models trained on them struggle to detect novel fake news driven by political and social change; the models may also produce biased output for inputs containing specific person names and organization names. We refer to this problem as Diachronic Bias because it is caused by the creation dates of the news in each dataset. In this study, we confirm the bias, especially for proper nouns such as person names, by examining how phrase frequencies deviate across the datasets. Based on these findings, we propose masking methods that use Wikidata to mitigate the influence of person names, and we validate whether they make fake news detection models more robust through experiments on in-domain and out-of-domain data.
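The abstract does not spell out the masking procedure, but the core idea can be sketched as follows: look a candidate phrase up on Wikidata, check whether the matched entity is an instance of "human" (property P31 with value Q5), and if so replace the phrase with a generic mask token before training or inference. The candidate list, the `[person]` token, and the helper names below are illustrative assumptions, not the authors' implementation; in practice the candidates would come from a named entity recognizer.

```python
"""Minimal illustrative sketch of person-name masking via Wikidata.

Assumptions (not from the paper): candidate phrases are supplied by the
caller, and confirmed person names are replaced with the token "[person]".
"""
import re
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"
HUMAN = "Q5"  # Wikidata item for the class "human"


def is_person(phrase: str) -> bool:
    """Return True if Wikidata's top match for `phrase` has P31 = Q5 (human)."""
    search = requests.get(WIKIDATA_API, params={
        "action": "wbsearchentities", "search": phrase,
        "language": "en", "format": "json",
    }).json()
    hits = search.get("search", [])
    if not hits:
        return False
    entity_id = hits[0]["id"]
    # Fetch the "instance of" (P31) claims for the matched entity.
    claims = requests.get(WIKIDATA_API, params={
        "action": "wbgetclaims", "entity": entity_id,
        "property": "P31", "format": "json",
    }).json().get("claims", {}).get("P31", [])
    return any(
        c["mainsnak"].get("datavalue", {}).get("value", {}).get("id") == HUMAN
        for c in claims
    )


def mask_person_names(text: str, candidates: list[str]) -> str:
    """Replace every candidate phrase confirmed as a person with a mask token."""
    for phrase in candidates:
        if is_person(phrase):
            text = re.sub(re.escape(phrase), "[person]", text)
    return text


if __name__ == "__main__":
    headline = "Barack Obama met with Google executives in Tokyo."
    # Candidates would normally come from an NER tagger; hard-coded here.
    print(mask_person_names(headline, ["Barack Obama", "Google", "Tokyo"]))
    # -> "[person] met with Google executives in Tokyo."
```

Masking at the surface-text level like this keeps the rest of the input intact, so a detector trained on the masked data cannot latch onto time-specific person names as spurious veracity cues.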