An important challenge in tracking and detecting the dissemination of misinformation is to understand the political gap between people who engage with so-called fake news. A possible factor responsible for this gap is opinion polarization, which may prompt the general public to label content that they disagree with, or want to discredit, as fake. In this work, we study the relationship between political polarization and content reported by Twitter users as related to fake news. We investigate how polarization may create distinct narratives about what misinformation actually is. We perform our study based on two datasets collected from Twitter. The first dataset contains tweets about US politics in general, from which we compute the degree of polarization of each user towards the Republican and Democratic parties. In the second dataset, we collect tweets and URLs that co-occurred with fake-news-related keywords and hashtags, such as #FakeNews and #AlternativeFact, as well as reactions towards such tweets and URLs. We then analyze the relationship between polarization and what is perceived as misinformation, and whether users are designating information they disagree with as fake. Our results show an increase in the polarization of users and URLs associated with fake-news keywords and hashtags, compared to information not labeled as fake news. We discuss the impact of our findings on the challenges of tracking fake news in the ongoing battle against misinformation.
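The abstract does not spell out how the per-user polarization score is computed; the following is a minimal, hypothetical sketch in which a user's leaning is the normalized difference between their interactions with Republican- and Democratic-aligned accounts (all account lists and counts below are assumptions for illustration, not the authors' method).

# Minimal sketch (not the paper's exact method): a per-user leaning score in
# [-1, 1]; -1 = fully Democratic-leaning, +1 = fully Republican-leaning.
from collections import Counter

def leaning_score(interactions, republican_accounts, democratic_accounts):
    """interactions: iterable of account handles the user retweeted or mentioned."""
    counts = Counter(interactions)
    r = sum(c for a, c in counts.items() if a in republican_accounts)
    d = sum(c for a, c in counts.items() if a in democratic_accounts)
    if r + d == 0:
        return None  # no partisan signal for this user
    return (r - d) / (r + d)

# Hypothetical example data, for illustration only.
print(leaning_score(["@GOP", "@GOP", "@TheDemocrats"], {"@GOP"}, {"@TheDemocrats"}))
# -> 0.33..., a mildly Republican-leaning user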
The spreading of unsubstantiated rumors on online social networks (OSN), whether unintentional or intentional (e.g., for political reasons or even trolling), can have serious consequences, as in the recent case of rumors about Ebola causing disruption to health-care workers. Here we show that indicators aimed at quantifying information consumption patterns can provide important insights into the virality of false claims. In particular, we address the driving forces behind the popularity of content by analyzing a sample of 1.2M Italian Facebook users consuming different (and opposite) types of information (science and conspiracy news). We show that users' engagement with different content correlates with the number of friends having similar consumption patterns (homophily), indicating the area of the social network where certain types of content are more likely to spread. Then, we test diffusion patterns on an external sample of 4,709 intentionally satirical false claims, showing that neither the presence of hubs (structural properties) nor the most active users (influencers) are prevalent in viral phenomena. Instead, we find that in an environment where misinformation is pervasive, users' aggregation around shared beliefs may make habitual exposure to conspiracy stories (polarization) a determinant for the virality of false information.
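As a toy illustration of the homophily measurement described above (not the authors' pipeline), one can correlate each user's engagement with a content category against the number of their friends who consume the same category; all numbers below are hypothetical.

import numpy as np

# Hypothetical per-user measurements (illustration only).
engagement = np.array([3, 10, 1, 7, 25, 0, 14])      # e.g. likes/comments on conspiracy pages
similar_friends = np.array([2, 8, 1, 5, 20, 1, 11])  # friends with the same consumption pattern

r = np.corrcoef(engagement, similar_friends)[0, 1]   # Pearson correlation
print(f"Pearson r = {r:.2f}")                        # a positive r is consistent with homophily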
Political polarization appears to be on the rise, as measured by voting behavior, general affect towards opposing partisans and their parties, and content posted and consumed online. Research over the years has focused on the role of the Web as a driver of polarization. To further our understanding of the factors behind online polarization, in the present work we collect and analyze the Web browsing histories of tens of thousands of users alongside careful measurements of the time spent browsing various news sources. We show that online news consumption follows a polarized pattern, where users' visits to news sources aligned with their own political leaning are substantially longer than their visits to other news sources. Next, we show that such preferences hold at the individual as well as the population level, as evidenced by the emergence of clear partisan communities of news domains from aggregated browsing patterns. Finally, we tackle the important question of the role of user choices in polarization. Are users simply following the links proffered by their Web environment, or do they exacerbate partisan polarization by intentionally pursuing like-minded news sources? To answer this question, we compare browsing patterns with the underlying hyperlink structure spanned by the considered news domains, finding strong evidence of polarization in partisan browsing habits beyond what can be explained by the hyperlink structure of the Web.
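A minimal sketch of the dwell-time comparison described above, under assumed data shapes rather than the paper's actual code: for one user, compare average visit duration on news domains aligned with the user's leaning against duration on other news domains (domain labels and visit times are hypothetical).

from statistics import mean

def dwell_gap(visits, user_leaning, domain_leaning):
    """visits: list of (domain, seconds); leanings: e.g. 'left' or 'right'."""
    aligned = [s for d, s in visits if domain_leaning.get(d) == user_leaning]
    other = [s for d, s in visits if d in domain_leaning and domain_leaning[d] != user_leaning]
    if not aligned or not other:
        return None  # not enough signal to compare
    return mean(aligned) - mean(other)  # positive -> longer visits to like-minded sources

# Hypothetical example.
leanings = {"leftnews.example": "left", "rightnews.example": "right"}
visits = [("leftnews.example", 240), ("rightnews.example", 60), ("leftnews.example", 180)]
print(dwell_gap(visits, "left", leanings))  # 150.0 seconds longer on aligned sources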
Online debates are often characterised by extreme polarisation and heated discussions among users. The presence of hate speech online is becoming increasingly problematic, making the development of appropriate countermeasures necessary. In this work, we perform hate speech detection on a corpus of more than one million comments on YouTube videos through a machine learning model fine-tuned on a large set of hand-annotated data. Our analysis shows that there is no evidence of the presence of serial haters, understood as active users posting exclusively hateful comments. Moreover, consistent with the echo chamber hypothesis, we find that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use inappropriate, violent, or hateful language within their opponents' community. Interestingly, users loyal to reliable sources use, on average, more toxic language than their counterparts. Finally, we find that the overall toxicity of the discussion increases with its length, measured both in number of comments and in time. Our results show that, consistent with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.
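The classifier in this study is a model fine-tuned on a large hand-annotated corpus; as a self-contained stand-in (not the authors' model), the sketch below trains a small TF-IDF plus logistic-regression classifier on a handful of made-up example comments and applies it to unseen ones.

# Illustrative stand-in for a hate speech classifier; training data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = ["you are all idiots", "great video, thanks!", "I hate people like you", "interesting point"]
train_labels = [1, 0, 1, 0]  # 1 = hateful/toxic, 0 = acceptable (hypothetical annotations)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_comments, train_labels)

new_comments = ["thanks for sharing", "you people disgust me"]
print(clf.predict(new_comments))  # predicted toxicity labels for unseen comments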
The global COVID-19 pandemic has led to the online proliferation of health-related, political, and conspiratorial misinformation. Understanding the reach of and belief in this misinformation is vital to managing this crisis, as well as future crises. The results from our global survey show a troubling reach of and belief in COVID-related misinformation, as well as a correlation among those who primarily consume news from social media, and, in the United States, a strong correlation with political leaning.
The digital spread of misinformation is one of the leading threats to democracy, public health, and the global economy. Popular strategies for mitigating misinformation include crowdsourcing, machine learning, and media literacy programs that require social media users to classify news in binary terms as either true or false. However, research on peer influence suggests that framing decisions in binary terms can amplify judgment errors and limit social learning, whereas framing decisions in probabilistic terms can reliably improve judgments. In this preregistered experiment, we compare online peer networks that collaboratively evaluate the veracity of news by communicating either binary or probabilistic judgments. Exchanging probabilistic estimates of news veracity substantially improved individual and group judgments, with the effect of eliminating polarization in news evaluation. By contrast, exchanging binary classifications reduced social learning and entrenched polarization. The benefits of probabilistic social learning are robust to participants' education, gender, race, income, religion, and partisanship.
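As a toy contrast between the two communication formats (not the experiment's protocol), the sketch below aggregates hypothetical peer estimates of an article's veracity by averaging probabilities versus taking a majority vote over binary calls; the probabilistic aggregate retains graded uncertainty that the binary one discards.

from statistics import mean

peer_probs = [0.65, 0.55, 0.40, 0.70, 0.60]  # hypothetical "probability the article is true"
peer_votes = [round(p) for p in peer_probs]  # the same peers forced into binary true/false calls

prob_consensus = mean(peer_probs)                          # 0.58: a graded, uncertain consensus
binary_consensus = sum(peer_votes) > len(peer_votes) / 2   # True: majority vote hides the uncertainty

print(prob_consensus, binary_consensus)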