Content moderation in many social media systems, such as Twitter and Facebook, motivated the emergence of a new social network that promotes free speech, named Gab. Soon after its launch, Gab was removed from the Google Play Store for violating the company's hate speech policy and was rejected by Apple for similar reasons. In this paper we characterize Gab, aiming to understand who the users who joined it are and what kind of content they share on this system. Our findings show that Gab is a very politically oriented system that hosts users banned from other social networks, some of them over possible cases of hate speech and association with extremism. We provide the first measurement of news dissemination inside a right-leaning echo chamber, investigating a social medium where readers are rarely exposed to content that cuts across ideological lines, but rather are fed content that reinforces their current political or social views.
Echo chambers may prevent social media users from being exposed to other opinions and can therefore cause rampant negative effects. Abundant evidence includes the conspiracy theories and polarization surrounding the 2016 and 2020 US presidential elections, as well as the COVID-19 disinfodemic. To help better detect echo chambers and mitigate their negative effects, this paper explores the mechanisms and attributes of echo chambers in social media. In particular, we first illustrate four primary mechanisms related to three main factors: human psychology, social networks, and automatic systems. We then depict common attributes of echo chambers, focusing on the diffusion of misinformation, the spreading of conspiracy theories, the creation of social trends, political polarization, and the emotional contagion of users. We illustrate each mechanism and attribute from the combined perspectives of sociology, psychology, and social computing, with recent case studies. Our analysis suggests an emerging need to detect echo chambers and mitigate their negative effects.
Social media has become an important venue for diverse groups to share information, discuss political issues, and organize social movements. Recent scholarship has shown that the social media ecosystem can affect political thinking and expression. Individuals and groups across the political spectrum have engaged extensively with these platforms, even creating their own forums with varying approaches to content moderation in pursuit of freer standards of speech. The Gab social media platform arose in this context. Gab is a social media platform for the so-called alt-right, and much of the popular press has opined about the thematic content of discourse on Gab and platforms like it, but little research has examined the content itself. Using a publicly available dataset of all Gab posts from August 2016 until July 2019, the current paper analyzes a five percent random sample of this dataset to explore thematic content on the platform. We run multiple structural topic models, using standard procedures to arrive at an optimal number of topics k. The final model specifies 85 topics for 403,469 documents. We include as prevalence variables whether the source account has been flagged as a bot and the number of followers of the source account. Results suggest the most nodal topics in the dataset pertain to the authenticity of the Holocaust, the meaning of "red pill", and the journalistic merit of mainstream media. We conclude by discussing the implications of our findings for work in ethical content moderation, online community development, and political polarization, and we outline avenues for future research.
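The topic-count selection step described above (arriving at an optimal k via standard procedures) can be illustrated with a minimal sketch. The paper fits structural topic models, typically done with the R `stm` package; the sketch below substitutes a plain LDA model with coherence scoring in Python, and the token lists and candidate values of k are placeholders rather than the paper's data or procedure.

```python
# Minimal sketch of choosing a topic count k by coherence, assuming `docs` is a
# list of tokenized posts. This substitutes gensim LDA for the structural topic
# models used in the paper and illustrates only the model-selection idea.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

docs = [["holocaust", "media", "truth"],          # placeholder tokens; replace with
        ["red", "pill", "mainstream", "media"]]   # the tokenized five percent sample
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

scores = {}
for k in (2, 3, 5):                               # toy grid; the paper's final model uses k = 85
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
    cm = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary, coherence="u_mass")
    scores[k] = cm.get_coherence()                # higher (less negative) coherence is better

best_k = max(scores, key=scores.get)
print(scores, "->", best_k)
```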
Gab is an online social network often associated with the alt-right political movement and with users barred from other networks. It presents an interesting opportunity for research because near-complete data is available from day one of the network's creation. In this paper, we investigate the evolution of the user interaction graph, that is, the graph in which a link represents one user interacting with another at a given time. We view this graph both at different times and at different timescales. The latter is achieved by using sliding windows on the graph, which gives a novel perspective on social network data. The Gab network grows relatively slowly over periods of months but is subject to large bursts of arrivals over hours and days. We identify plausible events of interest to the Gab community associated with the most obvious such bursts. The network is characterised by interactions between 'strangers' rather than by reinforcing links between 'friends'. Gab usage follows the diurnal cycle of its predominantly US- and Europe-based users. At off-peak hours the Gab interaction network fragments into sub-networks with no interaction between them. A small group of users is highly influential across larger timescales, but a substantial number of users gain influence for short periods of time. Temporal analysis at different timescales gives new insights above and beyond what could be found on static graphs.
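As an illustration of the sliding-window view of the interaction graph, the sketch below builds one graph per time window and reports how many disconnected components it contains, the kind of fragmentation the off-peak-hours observation refers to. The edge-list format, window length, and sample data are assumptions for illustration, not the paper's pipeline.

```python
# Minimal sketch of a sliding-window view of a user interaction graph, assuming
# `interactions` is a list of (timestamp, source_user, target_user) tuples with
# timestamps in seconds. The 6-hour window length is an illustrative choice.
import networkx as nx

interactions = [(0, "a", "b"), (1800, "b", "c"), (90000, "d", "e")]  # placeholder data
window = 6 * 3600

t_min = min(t for t, _, _ in interactions)
t_max = max(t for t, _, _ in interactions)

start = t_min
while start <= t_max:
    g = nx.Graph()
    for t, u, v in interactions:
        if start <= t < start + window:
            g.add_edge(u, v)          # a link = one user interacting with another
    if g.number_of_nodes():
        # Off-peak windows tend to split into several disconnected sub-networks.
        print(start, g.number_of_nodes(), nx.number_connected_components(g))
    start += window
```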
Recent studies have shown that online users tend to select information adhering to their system of beliefs, ignore information that does not, and join groups formed around a shared narrative, i.e., echo chambers. Although a quantitative methodology for their identification is still missing, the phenomenon of echo chambers is widely debated at both the scientific and political levels. To shed light on this issue, we introduce an operational definition of echo chambers and perform a massive comparative analysis on more than 1B pieces of content produced by 1M users on four social media platforms: Facebook, Twitter, Reddit, and Gab. We infer users' leanings on controversial topics, ranging from vaccines to abortion, and reconstruct their interaction networks by analyzing different features, such as the domains of shared links, followed pages, follower relationships, and commented posts. Our method quantifies the existence of echo chambers along two main dimensions: homophily in the interaction networks and bias in information diffusion toward like-minded peers. We find peculiar differences across social media. Indeed, while Facebook and Twitter present clear-cut echo chambers in all the observed datasets, Reddit and Gab do not. Finally, we test the role of the social media platform in news consumption by comparing Reddit and Facebook. Again, we find support for the hypothesis that platforms implementing news feed algorithms, like Facebook, may elicit the emergence of echo chambers.
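A minimal sketch of the homophily dimension of this operational definition: compare each user's inferred leaning with the average leaning of the users they interact with; a strong positive correlation indicates that interactions stay among like-minded peers. The leanings and the toy network below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the homophily check behind an echo-chamber definition:
# compare each user's leaning (in [-1, 1]) with the mean leaning of their
# neighbors in the interaction network. All values below are placeholders.
import networkx as nx
import numpy as np

leaning = {"u1": -0.8, "u2": -0.6, "u3": 0.7, "u4": 0.9}   # inferred per-user leanings
g = nx.Graph([("u1", "u2"), ("u3", "u4"), ("u2", "u3")])    # interaction network

own, neigh = [], []
for u in g.nodes:
    nbrs = [leaning[v] for v in g.neighbors(u)]
    if nbrs:
        own.append(leaning[u])
        neigh.append(np.mean(nbrs))

# A high positive correlation means users mostly interact with like-minded peers.
print("correlation(own, neighborhood):", np.corrcoef(own, neigh)[0, 1])
```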
Most information operations involve users who may foster polarization and distrust toward science and mainstream journalism without being conscious of their role. Gab is well known to be an extremist-friendly platform that exercises little control over posted content, which makes it an ideal benchmark for studying phenomena potentially related to polarization, such as misinformation spreading. The combination of these factors may lead to hate as well as to episodes of harm in the real world. In this work we characterize the interaction patterns on Gab around the COVID-19 topic. To assess the spreading of different content types, we analyze consumption patterns based on both interaction type and source reliability. Overall, we find no strong statistical differences in the social response to questionable and reliable content, with both following a power law distribution. However, questionable and reliable sources display structural and topical differences in their use of hashtags. The commenting behaviour of users, in terms of both lifetime and sentiment, reveals that questionable and reliable posts are perceived in the same manner. We conclude that, despite evident differences between questionable and reliable posts, Gab users do not differentiate between them and treat them as a whole. Our results provide insights toward the understanding of coordinated inauthentic behavior and the early warning of information operations.
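The comparison of social response to questionable and reliable content can be sketched as follows: fit a heavy-tailed (power law) model to the engagement counts of each group and test whether the two samples differ. The `powerlaw` and `scipy` calls are standard, but the synthetic comment counts are placeholders, and any agreement with the paper's finding here is purely illustrative.

```python
# Minimal sketch of comparing the social response (e.g., comments per post) to
# questionable vs reliable sources: fit power laws and run a two-sample KS test.
# The engagement counts below are synthetic placeholders.
import numpy as np
import powerlaw                     # pip install powerlaw
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
questionable = rng.pareto(2.5, 1000) + 1   # comments per post, questionable sources
reliable = rng.pareto(2.5, 1000) + 1       # comments per post, reliable sources

fit_q = powerlaw.Fit(questionable)
fit_r = powerlaw.Fit(reliable)
print("alpha (questionable):", fit_q.power_law.alpha)
print("alpha (reliable):    ", fit_r.power_law.alpha)

# A large p-value would be consistent with the two engagement distributions
# being statistically indistinguishable, as reported in the abstract.
stat, p = ks_2samp(questionable, reliable)
print("KS statistic:", stat, "p-value:", p)
```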