
How to Maximize the Spread of Social Influence: A Survey

Added by Giuseppe De Nittis
Publication date: 2018
Language: English





This survey presents the main results achieved for the influence maximization problem in social networks. The problem is well studied in the literature and, thanks to its recent applications, some of which are currently deployed in the field, it is receiving more and more attention from the scientific community. The problem can be formulated as follows: given a graph in which each node has a certain probability of influencing its neighbors, select a subset of vertices so that the number of nodes in the network that are influenced is maximized. Starting from this model, we introduce the main theoretical developments and computational results that have been achieved, taking into account different diffusion models describing how information spreads throughout the network, various ways in which the sources of information can be placed, and how to tackle the problem in the presence of uncertainties affecting the network. Finally, we present one of the main applications that has been developed and deployed, exploiting the tools and techniques previously discussed.
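As a concrete illustration of the formulation above, the sketch below implements the classic greedy heuristic for seed selection under the Independent Cascade diffusion model, one of the models the survey covers: the expected spread is estimated by Monte Carlo simulation and seeds are added one at a time by largest marginal gain. The toy graph, the edge-probability attribute "p", and all parameter values are illustrative assumptions, not part of the survey itself.

```python
import random
import networkx as nx

def simulate_ic(graph, seeds, prob_attr="p"):
    """One Monte Carlo run of the Independent Cascade model: each newly
    activated node gets a single chance to activate each inactive neighbor,
    succeeding with the probability stored on the edge."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.successors(u):
                if v not in active and random.random() < graph[u][v][prob_attr]:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return len(active)

def greedy_influence_max(graph, k, runs=100):
    """Pick k seeds one at a time, each time adding the node with the largest
    estimated marginal gain in spread (the classic (1 - 1/e) greedy)."""
    seeds = []
    for _ in range(k):
        base = sum(simulate_ic(graph, seeds) for _ in range(runs)) / runs
        best_node, best_gain = None, -1.0
        for v in graph.nodes():
            if v in seeds:
                continue
            est = sum(simulate_ic(graph, seeds + [v]) for _ in range(runs)) / runs
            if est - base > best_gain:
                best_node, best_gain = v, est - base
        seeds.append(best_node)
    return seeds

# Toy usage on a random directed graph with uniform influence probabilities.
g = nx.gnp_random_graph(50, 0.08, directed=True, seed=1)
nx.set_edge_attributes(g, 0.1, "p")
print(greedy_influence_max(g, k=3))
```

On realistic instances one would use many more Monte Carlo runs, or a scalable spread estimator, but the greedy structure stays the same.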



Related research

The massive spread of digital misinformation has been identified as a major global risk and has been alleged to influence elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of misinformation online and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. With few exceptions, these efforts have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots played a disproportionate role in amplifying low-credibility content. Accounts that actively spread articles from low-credibility sources are significantly more likely to be bots. Automated accounts are particularly active in amplifying content in the very early spreading moments, before an article goes viral. Bots also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, retweeting bots who post links to low-credibility content. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
In this paper, we consider a dataset comprising press releases about health research from different universities in the UK, along with a corresponding set of news articles. First, we carry out an exploratory analysis to understand how the basic information published in scientific journals gets exaggerated as it is reported in these press releases or news articles. This initial analysis shows that some news agencies exaggerate almost 60% of the articles they publish in the health domain; more than 50% of the press releases from certain universities are exaggerated; and articles on topics such as lifestyle and childhood are heavily exaggerated. Motivated by these observations, we set the central objective of this paper to investigate how exaggerated news spreads over an online social network like Twitter. The LIWC analysis points to a remarkable observation: tweets posted late in the diffusion process are essentially laden with words from the "opinion" and "realize" categories, which indicates that, given sufficient time, the wisdom of the crowd is actually able to tell apart the exaggerated news. As a second step, we study the characteristics of the users who never or rarely post exaggerated news content and compare them with those who post exaggerated news content more frequently. We observe that the latter class of users have fewer retweets or mentions per tweet, have significantly more followers, use more slang words, fewer hyperbolic words, and fewer word contractions. We also observe that LIWC categories such as bio, health, body, and negative emotion are more pronounced in the tweets posted by users in the latter class. As a final step, we use these observations as features and automatically classify the two groups, achieving an F1 score of 0.83.
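A minimal sketch of the kind of user-level classification described in the preceding abstract, using aggregate features like those named there (retweets and mentions per tweet, follower count, slang/hyperbolic word rates, LIWC category rates). The feature names, the random-forest classifier, and the synthetic data are assumptions for illustration; they are not the paper's actual pipeline or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical per-user aggregate features, loosely mirroring those
# listed in the abstract above.
FEATURES = ["retweets_per_tweet", "mentions_per_tweet", "followers",
            "slang_rate", "hyperbole_rate", "contraction_rate",
            "liwc_bio", "liwc_health", "liwc_body", "liwc_negemo"]

def classify_users(X, y):
    """X: (n_users, len(FEATURES)) matrix, y: 1 = frequent exaggerator.
    Returns the F1 score on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te))

# Synthetic demo data only; real features would be aggregated per user.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
print(f"F1 on synthetic data: {classify_users(X, y):.2f}")
```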
Online social networks are used to diffuse opinions and ideas among users, enabling faster communication and a wider audience. The way in which opinions are conditioned by social interactions is usually called social influence. Social influence is extensively used during political campaigns to advertise and support candidates. Herein we consider the problem of exploiting social influence in a network of voters in order to change their opinion about a target candidate, with the aim of increasing his chance to win/lose the election in a wide range of voting systems. We introduce Linear Threshold Ranking, a natural and powerful extension of the well-established Linear Threshold Model, which describes the change of opinions taking into account the amount of exercised influence. By showing submodularity, we are able to maximize the score of a target candidate up to a factor of $1-1/e$. We exploit this property to provide a $\frac{1}{3}(1-1/e)$-approximation algorithm for the constructive election control problem. Similarly, we get a $\frac{1}{2}(1-1/e)$-approximation ratio in the destructive scenario. The algorithm can be used in arbitrary scoring-rule voting systems, including plurality rule and Borda count. Finally, we perform an experimental study on real-world networks, measuring the Probability of Victory (PoV) and Margin of Victory (MoV) of the target candidate, to validate the model and to test the capability of the algorithm.
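For reference, the sketch below simulates the standard Linear Threshold Model that Linear Threshold Ranking extends: a node becomes active once the total weight of its active in-neighbors reaches its threshold. The ranking extension and the election-control greedy from the abstract above are not reproduced; edge weights and node thresholds are assumed to be stored as attributes "w" and "theta".

```python
import networkx as nx

def linear_threshold(graph, seeds):
    """Return the set of nodes eventually activated from `seeds` under the
    standard Linear Threshold Model: a node activates when the summed weight
    of its active in-neighbors is at least its threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in graph.nodes():
            if v in active:
                continue
            weight = sum(graph[u][v]["w"] for u in graph.predecessors(v) if u in active)
            if weight >= graph.nodes[v]["theta"]:
                active.add(v)
                changed = True
    return active

# Toy usage on a 4-node directed path with uniform thresholds.
g = nx.DiGraph()
g.add_weighted_edges_from([(0, 1, 0.6), (1, 2, 0.6), (2, 3, 0.4)], weight="w")
nx.set_node_attributes(g, 0.5, "theta")
print(linear_threshold(g, seeds={0}))  # {0, 1, 2}: node 3's incoming weight 0.4 < 0.5
```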
Users of Online Social Networks (OSNs) interact with each other more than ever. In the context of a public discussion group, people receive, read, and write comments in response to articles and postings. In the absence of access control mechanisms, OSNs are a great environment for attackers to influence others, from spreading phishing URLs to posting fake news. Moreover, OSN user behavior can be predicted by social science concepts, including conformity and the bandwagon effect. In this paper, we show how social recommendation systems affect the occurrence of malicious URLs on Facebook. We exploit temporal features to build a prediction framework, achieving greater than 75% accuracy, that predicts whether the subsequent behavior of group users will increase or not. We also demarcate classes of URLs, including malicious URLs classified as causing critical damage, as well as those of a lesser nature that only inflict light damage, such as aggressive commercial advertisements and spam content. It is our hope that the data and analyses in this paper provide a better understanding of OSN user reactions to different categories of malicious URLs, thereby providing a way to mitigate the influence of these malicious URL attacks.
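A hypothetical illustration of what a temporal-feature prediction framework of this kind might look like: per-URL interaction counts in consecutive time windows serve as features, and the label is whether activity grows in the following window. The window layout, the logistic-regression classifier, and the toy data are assumptions; the paper's actual features and model are not reproduced.

```python
from dataclasses import dataclass
from sklearn.linear_model import LogisticRegression

@dataclass
class UrlActivity:
    counts: list  # interactions (comments/likes) per hour after the URL is posted

def features_and_label(activity, history=3):
    """Use the first `history` hourly counts as features; label is 1 if
    activity in the following hour exceeds the last observed hour."""
    x = activity.counts[:history]
    y = int(activity.counts[history] > activity.counts[history - 1])
    return x, y

# Toy activity traces for a handful of posted URLs.
traces = [UrlActivity([5, 8, 12, 20]), UrlActivity([9, 6, 4, 2]),
          UrlActivity([1, 3, 7, 15]), UrlActivity([10, 10, 9, 5])]
X, y = zip(*(features_and_label(t) for t in traces))
clf = LogisticRegression().fit(list(X), list(y))
print(clf.predict([[4, 9, 14]]))  # will activity on this URL keep rising?
```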
The operation of adding edges has been frequently used in the study of opinion dynamics in social networks for various purposes. In this paper, we consider the edge addition problem for the DeGroot model of opinion dynamics in a social network with $n$ nodes and $m$ edges, in the presence of a small number $s \ll n$ of competing leaders with binary opposing opinions 0 or 1. Concretely, we pose and investigate the problem of maximizing the equilibrium overall opinion by creating $k$ new edges from a candidate edge set, where each edge is incident to a 1-valued leader and a follower node. We show that the objective function is monotone and submodular. We then propose a simple greedy algorithm with an approximation factor of $(1-\frac{1}{e})$ that approximately solves the problem in $O(n^3)$ time. Moreover, we provide a fast algorithm with a $(1-\frac{1}{e}-\epsilon)$ approximation ratio and $\tilde{O}(mk\epsilon^{-2})$ time complexity for any $\epsilon>0$, where the $\tilde{O}(\cdot)$ notation suppresses ${\rm poly}(\log n)$ factors. Extensive experiments demonstrate that our second approximation algorithm is efficient and effective, and scales to large networks with more than a million nodes.
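A hedged sketch of the greedy edge-addition heuristic described above, for an undirected DeGroot network: leaders hold fixed opinions in {0, 1}, followers average their neighbors' opinions, and each candidate edge joins a 1-valued leader to a follower. The equilibrium follower opinions are obtained by solving a linear system; variable names and the toy graph are assumptions, and the paper's fast $(1-\frac{1}{e}-\epsilon)$ algorithm is not reproduced.

```python
import numpy as np
import networkx as nx

def equilibrium_overall_opinion(graph, leader_opinion):
    """Solve for follower equilibrium opinions under DeGroot averaging with
    fixed leader opinions, and return the overall (summed) opinion."""
    followers = [v for v in graph.nodes() if v not in leader_opinion]
    idx = {v: i for i, v in enumerate(followers)}
    n = len(followers)
    A = np.zeros((n, n))   # follower-to-follower averaging weights
    b = np.zeros(n)        # contribution of fixed leader opinions
    for v in followers:
        deg = graph.degree(v)
        for u in graph.neighbors(v):
            if u in leader_opinion:
                b[idx[v]] += leader_opinion[u] / deg
            else:
                A[idx[v], idx[u]] += 1.0 / deg
    x = np.linalg.solve(np.eye(n) - A, b)   # equilibrium: x = A x + b
    return x.sum() + sum(leader_opinion.values())

def greedy_edge_addition(graph, leader_opinion, candidates, k):
    """Add k candidate (1-leader, follower) edges, one at a time, each time
    picking the edge with the largest gain in equilibrium overall opinion."""
    g = graph.copy()
    remaining = set(candidates)
    chosen = []
    for _ in range(k):
        best_edge, best_val = None, -np.inf
        for e in remaining:
            g.add_edge(*e)
            val = equilibrium_overall_opinion(g, leader_opinion)
            g.remove_edge(*e)
            if val > best_val:
                best_edge, best_val = e, val
        g.add_edge(*best_edge)
        chosen.append(best_edge)
        remaining.discard(best_edge)
    return chosen

# Toy usage: a path of three followers between a 1-leader and a 0-leader.
g = nx.path_graph(["L1", "f1", "f2", "f3", "L0"])
leaders = {"L1": 1.0, "L0": 0.0}
cands = [("L1", "f2"), ("L1", "f3")]
print(greedy_edge_addition(g, leaders, cands, k=1))
```

The exhaustive re-solve per candidate mirrors the simple $O(n^3)$ greedy; the paper's faster variant avoids solving the full system for every candidate edge.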
