The understanding of an offense is subjective, and people may hold different opinions about the offensiveness of a comment. Moreover, offenses and hate speech may be conveyed through sarcasm, which hides the real intention of the comment and makes the annotators' decision harder. Therefore, a well-structured annotation process is crucial both to a better understanding of hate speech and offensive language phenomena and to better performance of machine learning classifiers. In this paper, we describe a corpus annotation process designed by a linguist, a hate speech specialist, and machine learning engineers to support the identification of hate speech and offensive language on social media. In addition, we provide the first robust dataset of this kind for Brazilian Portuguese. The corpus was collected from Instagram posts of political personalities and manually annotated, comprising 7,000 documents labeled according to three layers: a binary classification (offensive versus non-offensive language), the level of offense (highly, moderately, and slightly offensive messages), and the target of the discriminatory content (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology to the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators, and high inter-annotator agreement was achieved. The proposed annotation approach is also language- and domain-independent; nevertheless, it is currently customized for Brazilian Portuguese.
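To make the three-layer scheme concrete, the sketch below models one annotated comment as a simple data structure. This is an illustrative assumption, not the authors' actual data format: the class and field names (AnnotatedComment, offensive, level, targets) are hypothetical, and only the label values are taken from the description above.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class OffenseLevel(Enum):
    # Layer 2: level of offense, as described in the corpus annotation scheme.
    HIGHLY_OFFENSIVE = "highly offensive"
    MODERATELY_OFFENSIVE = "moderately offensive"
    SLIGHTLY_OFFENSIVE = "slightly offensive"

# Layer 3: target categories of discriminatory content listed in the abstract.
TARGET_CATEGORIES = [
    "xenophobia", "racism", "homophobia", "sexism", "religious intolerance",
    "partyism", "apology to the dictatorship", "antisemitism", "fatphobia",
]

@dataclass
class AnnotatedComment:
    text: str                          # the Instagram comment being annotated
    offensive: bool                    # layer 1: offensive vs. non-offensive
    level: Optional[OffenseLevel] = None   # layer 2: set only when offensive
    targets: List[str] = field(default_factory=list)  # layer 3: zero or more categories

# Hypothetical example of a single annotation from one of the three annotators.
example = AnnotatedComment(
    text="...",
    offensive=True,
    level=OffenseLevel.MODERATELY_OFFENSIVE,
    targets=["partyism"],
)

In practice, each comment would carry three such annotations (one per annotator), from which the inter-annotator agreement reported above can be computed.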
The 2020 US Elections have been, more than ever before, characterized by social media campaigns and mutual accusations. We investigate in this paper if this manifests also in online communication of the supporters of the candidates Biden and Trump, b
The exponential growths of social media and micro-blogging sites not only provide platforms for empowering freedom of expressions and individual voices, but also enables people to express anti-social behaviour like online harassment, cyberbullying, a
Hate Speech has become a major content moderation issue for online social media platforms. Given the volume and velocity of online content production, it is impossible to manually moderate hate speech related content on any platform. In this paper we
With growing role of social media in shaping public opinions and beliefs across the world, there has been an increased attention to identify and counter the problem of hate speech on social media. Hate speech on online spaces has serious manifestatio
Many specialized domains remain untouched by deep learning, as large labeled datasets require expensive expert annotators. We address this bottleneck within the legal domain by introducing the Contract Understanding Atticus Dataset (CUAD), a new data