
Characterizing Online Engagement with Disinformation and Conspiracies in the 2020 U.S. Presidential Election

Added by Karishma Sharma
Publication date: 2021
Language: English





Identifying and characterizing disinformation in political discourse on social media is critical to ensuring the integrity of elections and democratic processes around the world. Persistent manipulation of social media has heightened concerns regarding the 2020 U.S. Presidential Election, due to its potential to influence individual opinions and social dynamics. In this work, we focus on the identification of distorted facts, in the form of unreliable and conspiratorial narratives in election-related tweets, to characterize discourse manipulation prior to the election. We apply a detection model to separate factual from unreliable (or conspiratorial) claims in a dataset of 242 million election-related tweets. The identified claims are used to investigate targeted topics of disinformation and conspiracy groups, most notably the far-right QAnon conspiracy group. Further, we characterize account engagement with unreliable and conspiracy tweets, and with the QAnon conspiracy group, by political leaning and tweet type. Finally, using a regression discontinuity design, we investigate whether Twitter's actions to curb QAnon activity on the platform were effective, and how QAnon accounts adapt to Twitter's restrictions.
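The abstract does not describe the regression discontinuity setup in detail; the sketch below is a minimal, illustrative version of such an analysis around a single policy cutoff, using synthetic daily counts of QAnon-related tweets. The cutoff date, variable names, and effect sizes are assumptions for illustration, not values from the paper.

```python
# Minimal regression-discontinuity sketch (illustrative, not the paper's code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic daily QAnon-related tweet counts with a level drop after an
# assumed enforcement date (all numbers are made up for illustration).
days = pd.date_range("2020-06-01", "2020-09-30", freq="D")
cutoff = pd.Timestamp("2020-07-21")                 # assumed policy cutoff
running = np.asarray((days - cutoff).days)          # days relative to cutoff
treated = (running >= 0).astype(int)                # post-restriction indicator
counts = 5000 - 3.0 * running - 1200 * treated + rng.normal(0, 150, len(days))

df = pd.DataFrame({"count": counts, "running": running, "treated": treated})

# Local linear RDD: separate slopes on each side of the cutoff; the coefficient
# on `treated` estimates the discontinuity attributable to the restrictions.
model = smf.ols("count ~ treated + running + treated:running", data=df).fit()
print(model.params)
```

In a real analysis one would also vary the bandwidth around the cutoff and test placebo dates to check that the estimated jump is specific to the enforcement date.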




Read More

The dynamics and influence of fake news on Twitter during the 2016 US presidential election remain to be clarified. Here, we use a dataset of 171 million tweets from the five months preceding election day to identify 30 million tweets, from 2.2 million users, which contain a link to news outlets. Based on a classification of news outlets curated by www.opensources.co, we find that 25% of these tweets spread either fake or extremely biased news. We characterize the networks of information flow to find the most influential spreaders of fake and traditional news, and use causal modeling to uncover how fake news influenced the presidential election. We find that, while top influencers spreading traditional center and left-leaning news largely influence the activity of Clinton supporters, this causality is reversed for fake news: the activity of Trump supporters influences the dynamics of the top fake news spreaders.
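As a rough illustration of how influential spreaders can be surfaced from a retweet network (not the authors' actual pipeline or data), the sketch below builds a small directed graph and ranks accounts by weighted out-degree and by PageRank on the reversed graph; all account names are made up.

```python
# Illustrative sketch: rank spreaders in a directed retweet network (toy data).
import networkx as nx

# Edges point from the retweeted account to the retweeter, so influence flows
# along edges; repeated retweets accumulate as edge weights.
retweets = [
    ("outlet_a", "user_1"), ("outlet_a", "user_2"), ("outlet_a", "user_3"),
    ("user_1", "user_4"), ("outlet_b", "user_2"), ("outlet_b", "user_5"),
]

G = nx.DiGraph()
for src, dst in retweets:
    w = G[src][dst]["weight"] + 1 if G.has_edge(src, dst) else 1
    G.add_edge(src, dst, weight=w)

# Two simple influence proxies: weighted out-degree and reversed-graph PageRank.
out_deg = dict(G.out_degree(weight="weight"))
pr = nx.pagerank(G.reverse(), weight="weight")

top = sorted(G.nodes, key=lambda n: (out_deg.get(n, 0), pr[n]), reverse=True)[:3]
print("top spreaders:", top)
```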
We use a method based on machine learning, big-data analytics, and network theory to process millions of messages posted on Twitter and predict election outcomes. The model achieved accurate results in the Argentina primary presidential election on August 11, 2019, predicting the large-margin win of candidate Alberto Fernandez over president Mauricio Macri, a result that none of the traditional pollsters in that country was able to predict and that led to a major bond market collapse. We apply the model to the upcoming Argentina presidential election on October 27, 2019, yielding the following results: Fernandez 47.5%, Macri 30.9%, and third party 21.6%. Our method improves over traditional polling, which is based on direct interactions with a small number of individuals and is plagued by ever-declining response rates, currently in the low single digits. It provides a reliable polling method that can be applied not only to predict elections but to discover any trend in society, for instance, what people think about climate change, politics, or education.
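The abstract does not specify how tweet-level signals are aggregated into vote shares; the toy sketch below shows one simple possibility, counting per-tweet candidate-support labels assumed to come from some upstream classifier. The labels and proportions are illustrative, not the paper's data or model.

```python
# Toy sketch: turn per-tweet candidate-support labels into vote-share estimates.
from collections import Counter

# Assume an upstream classifier tagged each tweet with the candidate it
# supports, or None when ambiguous (labels here are made up).
labels = ["fernandez", "macri", "fernandez", None, "third_party", "fernandez",
          "macri", "fernandez", "third_party", None]

counts = Counter(label for label in labels if label is not None)
total = sum(counts.values())
shares = {cand: 100 * n / total for cand, n in counts.items()}

for cand, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{cand}: {share:.1f}%")
```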
It is a widely accepted fact that state-sponsored Twitter accounts operated during the 2016 US presidential election, spreading millions of tweets with misinformation and inflammatory political content. Whether these social media campaigns of the so-called troll accounts were able to manipulate public opinion is still in question. Here we aim to quantify the influence of troll accounts and the impact they had on Twitter by analyzing 152.5 million tweets from 9.9 million users, including 822 troll accounts. The data, collected during the US election campaign, contain original troll tweets before they were deleted by Twitter. From these data, we constructed a very large interaction graph: a directed graph of 9.3 million nodes and 169.9 million edges. Recently, Twitter released datasets on the misinformation campaigns of 8,275 state-sponsored accounts linked to Russia, Iran, and Venezuela as part of the investigation of foreign interference in the 2016 US election. These data serve as a ground-truth identifier of troll users in our dataset. Using graph analysis techniques, we quantify the diffusion cascades of web and media context that have been shared by the troll accounts. We present strong evidence that authentic users were the source of the viral cascades. Although the trolls participated in the viral cascades, they did not have a leading role in them, and only four troll accounts were truly influential.
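To make the root-versus-participant distinction concrete (with toy data, not the study's 9.3-million-node graph), the sketch below builds a small directed cascade and checks whether flagged troll accounts appear at the root (in-degree zero) or only downstream; the account names are hypothetical.

```python
# Illustrative sketch: are flagged troll accounts cascade roots or participants?
import networkx as nx

trolls = {"troll_1"}

# Directed edges: retweeted account -> retweeter, so cascade roots have
# in-degree zero.
cascade = nx.DiGraph([
    ("user_a", "user_b"), ("user_a", "troll_1"),
    ("troll_1", "user_c"), ("user_b", "user_d"),
])

roots = [n for n in cascade if cascade.in_degree(n) == 0]
troll_roots = [n for n in roots if n in trolls]
troll_participants = [n for n in cascade if n in trolls and n not in roots]

print("cascade roots:", roots)              # authentic sources in this toy case
print("trolls as roots:", troll_roots)      # empty here: trolls only participate
print("trolls downstream:", troll_participants)
```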
Online debates are often characterised by extreme polarisation and heated discussions among users. The presence of hate speech online is becoming increasingly problematic, making the development of appropriate countermeasures necessary. In this work, we perform hate speech detection on a corpus of more than one million comments on YouTube videos through a machine learning model fine-tuned on a large set of hand-annotated data. Our analysis shows that there is no evidence of the presence of serial haters, intended as active users posting exclusively hateful comments. Moreover, consistently with the echo chamber hypothesis, we find that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use inappropriate, violent, or hateful language within their opponents' community. Interestingly, users loyal to reliable sources use, on average, more toxic language than their counterparts. Finally, we find that the overall toxicity of the discussion increases with its length, measured both in terms of number of comments and time. Our results show that, consistently with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.
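The paper fine-tunes a model on a large hand-annotated corpus; as a minimal stand-in for a supervised hate-speech classifier (not the authors' model or data), the sketch below trains a TF-IDF plus logistic-regression pipeline on a tiny made-up sample and scores a new comment.

```python
# Minimal stand-in for a supervised hate-speech classifier (toy training data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training sample: 1 = hateful, 0 = not hateful.
comments = ["you people are garbage", "great video, thanks for sharing",
            "go back where you came from", "interesting point about the data"]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)

# Probability that a new comment is hateful.
new = ["thanks, this was really helpful"]
print(clf.predict_proba(new)[:, 1])
```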
With the ubiquitous reach of social media, influencers are increasingly central to the articulation of political agendas on a range of topics. We curate a sample of tweets from the 200 most followed sportspersons in India and the United States, respectively, since 2019, map their connections with politicians, and visualize their engagement with key topics online. We find significant differences between the ways in which Indian and US sportspersons engage with politics online: while leading Indian sportspersons tend to align closely with the ruling party and engage minimally in dissent, American sportspersons engage with a range of political issues and are willing to publicly criticize politicians or policy. Our findings suggest that the ownership and governmental control of sports impact the public stances that professional sportspersons are willing to take online. It might also be inferred that, depending upon the government of the day, speaking up against the state and the government in power carries different socio-economic costs in the US and India.

