
Brexit and bots: characterizing the behaviour of automated accounts on Twitter during the UK election

Posted by Matteo Bruno
Publication date: 2021
Paper language: English

Online Social Networks represent a novel opportunity for political campaigns, revolutionising the paradigm of political communication. Nevertheless, many studies have uncovered the presence of dis/misinformation campaigns and of malicious activities by genuine or automated users, putting the credibility of online platforms at severe risk. This phenomenon is particularly evident during crucial political events, such as political elections. In the present paper, we provide a comprehensive description of the structure of the networks of interactions among users and bots during the UK elections of 2019. In particular, we focus on the polarised discussion about Brexit on Twitter, analysing a data set of more than 10 million tweets posted over more than a month. We found that the presence of automated accounts fostered the debate particularly in the days before the UK national elections, during which we observed a steep increase of bots in the discussion; in the days after the election, their incidence returned to values similar to those observed a few weeks before the elections. On the other hand, we found that the number of suspended users (i.e. accounts that were removed by the platform for some violation of the Twitter policy) remained constant until the election day, after which it reached significantly higher values. Remarkably, after the TV debate between Boris Johnson and Jeremy Corbyn, we observed the injection of a large number of novel bots whose behaviour is markedly different from that of pre-existing ones. Finally, we explored the bots' stance, finding that their activity is spread across the whole political spectrum, although in different proportions, and we studied the different usage of hashtags by automated accounts and suspended users, thus targeting the formation of common narratives on different sides of the debate.
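
As a rough illustration of the temporal analysis described above, the sketch below computes the daily share of tweets authored by automated accounts from a flat tweet table. The file name, the column names and the boolean is_bot flag (e.g. produced by an external bot detector) are assumptions for illustration only, not the authors' actual pipeline.

```python
# Minimal sketch: daily share of tweets authored by automated accounts.
# Assumes a flat tweet table with a timestamp column and a boolean
# `is_bot` flag; file and column names are illustrative.
import pandas as pd

tweets = pd.read_csv("brexit_tweets.csv", parse_dates=["created_at"])

bot_share = (
    tweets.set_index("created_at")
          .resample("D")["is_bot"]      # one bucket per calendar day
          .mean()                       # fraction of tweets posted by bots
)

# Inspect the window around the 12 December 2019 election day.
print(bot_share.loc["2019-12-01":"2019-12-20"])
```

A spike in this series in the days before 12 December 2019, followed by a return to earlier values, would correspond to the pattern reported in the abstract.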




Read also

It is a widely accepted fact that state-sponsored Twitter accounts operated during the 2016 US presidential election, spreading millions of tweets with misinformation and inflammatory political content. Whether these social media campaigns of the so-called troll accounts were able to manipulate public opinion is still in question. Here we aim to quantify the influence of troll accounts and the impact they had on Twitter by analyzing 152.5 million tweets from 9.9 million users, including 822 troll accounts. The data, collected during the US election campaign, contain original troll tweets before they were deleted by Twitter. From these data, we constructed a very large interaction graph: a directed graph of 9.3 million nodes and 169.9 million edges. Recently, Twitter released datasets on the misinformation campaigns of 8,275 state-sponsored accounts linked to Russia, Iran and Venezuela as part of the investigation on the foreign interference in the 2016 US election. These data serve as a ground-truth identifier of troll users in our dataset. Using graph analysis techniques, we quantify the diffusion cascades of web and media content that have been shared by the troll accounts. We present strong evidence that authentic users were the source of the viral cascades. Although the trolls were participating in the viral cascades, they did not have a leading role in them and only four troll accounts were truly influential.
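
As a toy illustration of the graph-construction step mentioned above, the sketch below builds a directed interaction graph from an edge list of retweets and mentions and compares how often troll and authentic accounts are targeted; the input files and column names are hypothetical stand-ins, not the study's actual data.

```python
# Illustrative sketch: build a directed interaction graph from an edge
# list and compare in-degrees of troll vs. authentic accounts.
import pandas as pd
import networkx as nx

edges = pd.read_csv("interactions.csv")            # columns: source_user, target_user
trolls = set(pd.read_csv("troll_accounts.csv")["user_id"])

G = nx.from_pandas_edgelist(
    edges, source="source_user", target="target_user",
    create_using=nx.DiGraph(),
)

# How often are trolls vs. authentic users retweeted or mentioned?
troll_in = [d for n, d in G.in_degree() if n in trolls]
human_in = [d for n, d in G.in_degree() if n not in trolls]
print("mean in-degree (trolls):", sum(troll_in) / max(len(troll_in), 1))
print("mean in-degree (humans):", sum(human_in) / max(len(human_in), 1))
```
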
An infodemic is an emerging phenomenon caused by an overabundance of information online. This proliferation of information makes it difficult for the public to distinguish trustworthy news and credible information from untrustworthy sites and non-credible sources. The perils of an infodemic debuted with the outbreak of the COVID-19 pandemic, and bots (i.e., automated accounts controlled by a set of algorithms) are suspected of spreading the infodemic. Although previous research has revealed that bots played a central role in spreading misinformation during major political events, how bots behaved during the infodemic is unclear. In this paper, we examined the roles of bots in the case of the COVID-19 infodemic and the diffusion of non-credible information such as 5G and Bill Gates conspiracy theories and content related to Trump and WHO by analyzing retweet networks and retweeted items. We show the segregated topology of their retweet networks, which indicates that right-wing self-media accounts and conspiracy theorists may lead to this opinion cleavage, while malicious bots might favor amplification of the diffusion of non-credible information. Although the basic influence of information diffusion could be larger for human users than for bots, the effects of bots are non-negligible under an infodemic situation.
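
One possible way to make the notion of a segregated retweet topology concrete is sketched below: detect communities and compute the modularity of the resulting partition with networkx. The retweet edge list is a hypothetical input, and modularity is only one of several segregation measures that could be used here.

```python
# Rough sketch: quantify how segregated a retweet network is via
# community detection and modularity; input file is hypothetical.
import pandas as pd
import networkx as nx

edges = pd.read_csv("covid_retweets.csv")          # columns: retweeter, retweeted_user
G = nx.from_pandas_edgelist(edges, "retweeter", "retweeted_user")  # undirected view

communities = nx.community.greedy_modularity_communities(G)
Q = nx.community.modularity(G, communities)
# A high Q with a few large communities is consistent with a segregated
# ("cleaved") retweet topology.
print(f"{len(communities)} communities, modularity Q = {Q:.2f}")
```
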
Today, an estimated 75% of the British public access information about politics and public life online, and 40% do so via social media. With this context in mind, we investigate information sharing patterns over social media in the lead-up to the 2019 UK General Elections, and ask: (1) What type of political news and information were social media users sharing on Twitter ahead of the vote? (2) How much of it is extremist, sensationalist, or conspiratorial junk news? (3) How much public engagement did these sites get on Facebook in the weeks leading up to the vote? And (4) what are the most common narratives and themes relayed by junk news outlets?
The dynamics and influence of fake news on Twitter during the 2016 US presidential election remains to be clarified. Here, we use a dataset of 171 million tweets in the five months preceding the election day to identify 30 million tweets, from 2.2 million users, which contain a link to news outlets. Based on a classification of news outlets curated by www.opensources.co, we find that 25% of these tweets spread either fake or extremely biased news. We characterize the networks of information flow to find the most influential spreaders of fake and traditional news and use causal modeling to uncover how fake news influenced the presidential election. We find that, while top influencers spreading traditional center and left-leaning news largely influence the activity of Clinton supporters, this causality is reversed for the fake news: the activity of Trump supporters influences the dynamics of the top fake news spreaders.
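
A minimal sketch of the outlet-classification step is given below, assuming a curated domain-to-category table in the spirit of the opensources.co list; the file names, column names and categories are illustrative assumptions.

```python
# Hedged sketch: label tweets by the news outlet they link to, using a
# curated domain -> category table; inputs are illustrative only.
from urllib.parse import urlparse

import pandas as pd

outlets = pd.read_csv("outlet_labels.csv")         # columns: domain, category
domain_to_cat = dict(zip(outlets["domain"], outlets["category"]))

tweets = pd.read_csv("tweets_with_links.csv")      # columns: tweet_id, url

def categorise(url: str) -> str:
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return domain_to_cat.get(domain, "unknown")

tweets["category"] = tweets["url"].map(categorise)
# Share of tweets pointing to each outlet category
# (e.g. fake, extremely biased, traditional).
print(tweets["category"].value_counts(normalize=True))
```
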
Recent research has shown a substantial active presence of bots in online social networks (OSNs). In this paper we utilise our past work on studying bots (Stweeler) to comparatively analyse the usage and impact of bots and humans on Twitter, one of the largest OSNs in the world. We collect a large-scale Twitter dataset and define various metrics based on tweet metadata. We divide and filter the dataset into four popularity groups in terms of number of followers. Using a human annotation task we assign bot and human ground-truth labels to the dataset, and compare the annotations against an online bot detection tool for evaluation. We then ask a series of questions to discern important behavioural bot and human characteristics using metrics within and among the four popularity groups. From the comparative analysis we draw important differences as well as surprising similarities between the two entities, thus paving the way for reliable classification of automated political infiltration, advertisement campaigns, and general bot detection.
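
The popularity-group split described above could look roughly like the sketch below, which buckets accounts by follower count and compares a per-account metric between bots and humans; the thresholds, file and column names are illustrative assumptions rather than the paper's settings.

```python
# Sketch: bucket accounts by follower count and compare a per-account
# metric between bots and humans; all inputs are illustrative.
import pandas as pd

accounts = pd.read_csv("accounts.csv")   # columns: user_id, followers, is_bot, urls_per_tweet

bins = [0, 1_000, 100_000, 1_000_000, float("inf")]
names = ["<1k", "1k-100k", "100k-1M", ">1M"]
accounts["popularity"] = pd.cut(accounts["followers"], bins=bins, labels=names)

summary = (
    accounts.groupby(["popularity", "is_bot"], observed=True)["urls_per_tweet"]
            .mean()
            .unstack("is_bot")           # rows: popularity group, columns: human vs. bot
)
print(summary)
```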