
An in-depth characterisation of Bots and Humans on Twitter

Added by Zafar Gilani
Publication date: 2017
Language: English





Recent research has shown a substantial active presence of bots in online social networks (OSNs). In this paper we utilise our past work on studying bots (Stweeler) to comparatively analyse the usage and impact of bots and humans on Twitter, one of the largest OSNs in the world. We collect a large-scale Twitter dataset and define various metrics based on tweet metadata. We divide and filter the dataset into four popularity groups based on the number of followers. Using a human annotation task we assign bot and human ground-truth labels to the dataset, and compare the annotations against an online bot detection tool for evaluation. We then ask a series of questions to discern important behavioural characteristics of bots and humans using metrics within and among the four popularity groups. From the comparative analysis we draw important differences as well as surprising similarities between the two entities, thus paving the way for reliable classification of automated political infiltration, advertisement campaigns, and general bot detection.
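As a concrete illustration of the partitioning step described in the abstract, the sketch below buckets accounts into four follower-based popularity groups. The follower-count thresholds are assumptions for illustration; the paper defines its own boundaries.

```python
# Illustrative sketch of the follower-based partitioning step.
# The thresholds below are assumptions, not the paper's exact boundaries.
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    followers: int

def popularity_group(account: Account) -> str:
    """Map an account to one of four follower-count popularity groups."""
    f = account.followers
    if f < 1_000:
        return "low"
    if f < 100_000:
        return "mid"
    if f < 1_000_000:
        return "high"
    return "celebrity"

accounts = [Account("a", 120), Account("b", 250_000), Account("c", 5_000_000)]
groups: dict[str, list[str]] = {}
for acc in accounts:
    groups.setdefault(popularity_group(acc), []).append(acc.user_id)
print(groups)  # {'low': ['a'], 'high': ['b'], 'celebrity': ['c']}
```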




Related research

Online Social Networks represent a novel opportunity for political campaigns, revolutionising the paradigm of political communication. Nevertheless, many studies have uncovered the presence of dis/misinformation campaigns or of malicious activities by genuine or automated users, putting the credibility of online platforms at severe risk. This phenomenon is particularly evident during crucial political events, such as political elections. In the present paper, we provide a comprehensive description of the structure of the networks of interactions among users and bots during the UK elections of 2019. In particular, we focus on the polarised discussion about Brexit on Twitter, analysing a dataset of more than 10 million tweets posted over more than a month. We found that the presence of automated accounts fostered the debate particularly in the days before the UK national elections, in which we observed a steep increase of bots in the discussion; in the days after the election day, their incidence returned to values similar to those observed a few weeks before the elections. On the other hand, we found that the number of suspended users (i.e. accounts that were removed by the platform for some violation of the Twitter policy) remained constant until the election day, after which it reached significantly higher values. Remarkably, after the TV debate between Boris Johnson and Jeremy Corbyn, we observed the injection of a large number of novel bots whose behaviour was markedly different from that of pre-existing ones. Finally, we explored the bots' stance, finding that their activity is spread across the whole political spectrum, although in different proportions, and we studied the different usage of hashtags by automated accounts and suspended users, thus targeting the formation of common narratives on different sides of the debate.
The popularity of social media platforms such as Twitter has led to the proliferation of automated bots, creating both opportunities and challenges in information dissemination, user engagement, and quality of service. Past work on profiling bots has focused largely on malicious bots, with the assumption that these bots should be removed. In this work, however, we find many bots that are benign, and propose a new, broader categorization of bots based on their behaviors. This includes broadcast, consumption, and spam bots. To facilitate comprehensive analyses of bots and how they compare to human accounts, we develop a systematic profiling framework that includes a rich set of features and a classifier bank. We conduct extensive experiments to evaluate the performance of different classifiers under varying time windows, identify the key features of bots, and draw inferences about bots in a larger Twitter population. Our analysis encompasses more than 159K bot and human (non-bot) accounts on Twitter. The results provide interesting insights into the behavioral traits of both benign and malicious bots.
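A minimal sketch of what a classifier bank over per-account features might look like, assuming scikit-learn and synthetic toy data; the feature names, model choices, and labels are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a classifier bank evaluated over per-account features,
# in the spirit of the profiling framework above. Features, models, and the
# synthetic labels are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Toy feature matrix: [tweets_per_day, retweet_ratio, url_ratio, follower_count]
X = rng.random((200, 4))
y = rng.integers(0, 2, 200)  # 1 = bot, 0 = human (synthetic labels)

bank = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in bank.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```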
The Turing test aimed to distinguish the behavior of a human from that of a computer algorithm. Such a challenge is more relevant than ever in today's social media context, where limited attention and technology constrain the expressive power of humans, while incentives abound to develop software agents mimicking humans. These social bots interact, often unnoticed, with real people in social media ecosystems, but their abundance is uncertain. While many bots are benign, one can design harmful bots with the goals of persuading, smearing, or deceiving. Here we discuss the characteristics of modern, sophisticated social bots, and how their presence can endanger online ecosystems and our society. We then review current efforts to detect social bots on Twitter. Features related to content, network, sentiment, and temporal patterns of activity are imitated by bots, but at the same time can help discriminate synthetic behaviors from human ones, yielding signatures of engineered social tampering.
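To make the feature families concrete, here is a hedged sketch computing a few content and temporal signals from a user's tweets; the tweet fields ("text", "timestamp") and feature names are assumptions for illustration, not a specific API schema.

```python
# Hedged sketch of simple content and temporal features of the kind the
# survey above groups into families. The tweet dict fields are assumed.
from statistics import mean, pstdev

def extract_features(tweets: list[dict]) -> dict:
    texts = [t["text"] for t in tweets]
    times = sorted(t["timestamp"] for t in tweets)  # Unix seconds
    gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
    return {
        "avg_length": mean(len(s) for s in texts),      # content
        "url_ratio": mean("http" in s for s in texts),  # content
        "mean_gap_s": mean(gaps),                       # temporal
        "gap_stdev_s": pstdev(gaps),                    # temporal: very low
                                                        # variance suggests scheduling
    }

tweets = [
    {"text": "hello http://example.com", "timestamp": 0},
    {"text": "again", "timestamp": 3600},
    {"text": "and again", "timestamp": 7200},
]
print(extract_features(tweets))
```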
Social media platforms attempting to curb abuse and misinformation have been accused of political bias. We deploy neutral social bots that start following different news sources on Twitter, and track them to probe distinct biases emerging from platform mechanisms versus user interactions. We find no strong or consistent evidence of political bias in the news feed. Despite this, the news and information to which U.S. Twitter users are exposed depend strongly on the political leaning of their early connections. The interactions of conservative accounts are skewed toward the right, whereas liberal accounts are exposed to moderate content, shifting their experience toward the political center. Partisan accounts, especially conservative ones, tend to receive more followers and follow more automated accounts. Conservative accounts also find themselves in denser communities and are exposed to more low-credibility content.
Online social networks are often subject to influence campaigns by malicious actors through the use of automated accounts known as bots. We consider the problem of detecting bots in online social networks and assessing their impact on the opinions of individuals. We begin by analyzing the behavior of bots in social networks and identify that they exhibit heterophily, meaning they interact with humans more than with other bots. We use this property to develop a detection algorithm based on the Ising model from statistical physics. The bots are identified by solving a minimum cut problem. We show that this Ising model algorithm can identify bots with higher accuracy while utilizing much less data than other state-of-the-art methods. We then develop a function we call generalized harmonic influence centrality to estimate the impact bots have on the opinions of users in social networks. This function is based on a generalized opinion dynamics model and captures how the activity level and network connectivity of the bots shift equilibrium opinions. To apply generalized harmonic influence centrality to real social networks, we develop a deep neural network to measure the opinions of users based on their social network posts. Using this neural network, we then calculate the generalized harmonic influence centrality of bots in multiple real social networks. For some networks we find that a limited number of bots can cause non-trivial shifts in the population opinions. In other networks, we find that the bots have little impact. Overall, we find that generalized harmonic influence centrality is a useful operational tool to measure the impact of bots in social networks.
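A toy sketch of the min-cut labelling idea, assuming networkx: known bot and human seeds are attached as terminals, and the minimum cut splits the interaction graph into a bot side and a human side. The graph, edge weights, and seed accounts are invented for illustration; the paper derives its formulation from an Ising model over real interaction data.

```python
# Toy sketch of bot labelling via a minimum cut, assuming networkx. The
# interaction graph, weights, and seeds below are invented for illustration.
import networkx as nx

G = nx.DiGraph()
# Symmetric interaction graph: heavier edges = more frequent interaction.
for u, v, w in [("u1", "u2", 4), ("u1", "u3", 1),
                ("u2", "u3", 1), ("u3", "u4", 4)]:
    G.add_edge(u, v, capacity=w)
    G.add_edge(v, u, capacity=w)

# Terminals encode prior evidence: u1 looks bot-like, u4 looks human.
G.add_edge("BOT", "u1", capacity=10)
G.add_edge("u4", "HUMAN", capacity=10)

cut_value, (bot_side, human_side) = nx.minimum_cut(G, "BOT", "HUMAN")
print("bots:", sorted(bot_side - {"BOT"}))        # ['u1', 'u2']
print("humans:", sorted(human_side - {"HUMAN"}))  # ['u3', 'u4']
```

The cut falls on the weak ties between the seeded bot cluster and the seeded human cluster, which is the heterophily intuition the abstract describes.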
