
Disabling External Influence in Social Networks via Edge Recommendation

Published by: Victor Amelkin
Publication date: 2017
Research field: Informatics Engineering
Language: English





Existing socio-psychological studies suggest that users of a social network form their opinions relying on the opinions of their neighbors. According to the DeGroot opinion formation model, one value of particular importance is the asymptotic consensus value: the sum of user opinions weighted by the users' eigenvector centralities. This value acts as an attractor for the opinions in the network and is a lucrative target for external influence. However, since any potentially malicious control of the opinion distribution in a social network is clearly undesirable, it is important to design methods that prevent external attempts to strategically change the asymptotic consensus value. In this work, we assume that the adversary wants to maximize the asymptotic consensus value by altering the opinions of some users in a network; we then state DIVER, an NP-hard problem of disabling such external influence attempts by strategically adding a limited number of edges to the network. Relying on the theory of Markov chains, we provide a perturbation analysis that shows how eigenvector centrality, and hence DIVER's objective function, changes in response to the addition of an edge to the network. This analysis leads to the design of a pseudo-linear-time heuristic for DIVER, whose computation relies on efficient estimation of mean first passage times in a Markov chain. We confirm our theoretical findings in experiments.
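To make the central quantity concrete, the short sketch below computes the DeGroot asymptotic consensus value as the sum of initial opinions weighted by the stationary distribution (the eigenvector centralities) of the row-stochastic update matrix W. The toy 4-node graph, uniform edge weights, and plain power iteration are assumptions for illustration; this is not the paper's DIVER heuristic.

```python
import numpy as np

def row_stochastic(adj):
    """Normalize adjacency rows: each user averages the opinions of its neighbors."""
    adj = np.asarray(adj, dtype=float)
    return adj / adj.sum(axis=1, keepdims=True)

def stationary_distribution(W, iters=10000, tol=1e-12):
    """Dominant left eigenvector of the row-stochastic W (the users'
    eigenvector centralities), computed by simple power iteration."""
    n = W.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        nxt = pi @ W
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi / pi.sum()

# Toy star-like graph with self-loops (keeps the chain aperiodic); uniform weights.
A = np.array([[1, 1, 1, 1],
              [1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1]])
W = row_stochastic(A)
opinions = np.array([0.2, 0.9, 0.4, 0.7])      # initial user opinions

pi = stationary_distribution(W)
consensus = pi @ opinions                       # opinions weighted by centrality
print("centralities:", pi.round(3))
print("asymptotic consensus value:", round(float(consensus), 3))
```

Adding an edge changes W and hence its stationary distribution; the paper's perturbation analysis, based on mean first passage times of the associated Markov chain, estimates that change without recomputing the eigenvector from scratch.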




Read also

Chen Feng, Luoyi Fu, Bo Jiang (2020)
Influence maximization (IM) aims at maximizing the spread of influence by offering discounts to influential users (called seeding). In many applications, due to users' privacy concerns, overwhelming network scale, etc., it is hard to target any user in the network as one wishes. Instead, only a small subset of users is initially accessible. Such access limitation would significantly impair the influence spread, since IM often relies on seeding high-degree users, which are particularly rare in such a small subset due to the power-law structure of social networks. In this paper, we attempt to solve limited IM in real-world scenarios with an adaptive approach that accounts for seeding and diffusion uncertainty. Specifically, we consider fine-grained discounts and assume users accept a discount probabilistically. The diffusion process is modeled by the independent cascade model. To overcome the access limitation, we prove the set-wise friendship paradox (FP) phenomenon that neighbors have higher degree in expectation, and propose a two-stage seeding model with the FP embedded, where neighbors are seeded. On this basis, for comparison we formulate the non-adaptive case and the adaptive case, both proven to be NP-hard. In the non-adaptive case, discounts are allocated to users all at once. We show the monotonicity of influence spread w.r.t. discount allocation and design a two-stage coordinate descent framework to decide the discount allocation. In the adaptive case, users are sequentially seeded based on observations of existing seeding and diffusion results. We prove the adaptive submodularity and submodularity of the influence spread function in the two stages. Then, a series of adaptive greedy algorithms are proposed with constant approximation ratios.
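Two of the ideas above, seeding neighbors of the accessible users (motivated by the friendship paradox) and estimating spread under the independent cascade model, can be illustrated with a rough Monte Carlo sketch. The graph, activation probability, and seed budget below are illustrative assumptions, not the paper's adaptive algorithms.

```python
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, trials=200):
    """Monte Carlo estimate of expected spread under the independent cascade model."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            new = []
            for u in frontier:
                for v in G.neighbors(u):
                    if v not in active and random.random() < p:
                        active.add(v)
                        new.append(v)
            frontier = new
        total += len(active)
    return total / trials

random.seed(0)
G = nx.barabasi_albert_graph(500, 3)              # power-law-like social network
accessible = random.sample(list(G.nodes), 20)     # only a small subset is reachable

# Option A: seed some of the accessible users directly.
direct_spread = independent_cascade(G, accessible[:5])

# Option B: seed neighbors of accessible users, who tend to have higher degree
# (friendship paradox), keeping the same budget of 5 seeds.
neighbor_pool = {random.choice(list(G.neighbors(u))) for u in accessible}
neighbor_seeds = sorted(neighbor_pool, key=G.degree, reverse=True)[:5]
neighbor_spread = independent_cascade(G, neighbor_seeds)

print(f"direct seeding: {direct_spread:.1f}  neighbor seeding: {neighbor_spread:.1f}")
```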
Analysis of opinion dynamics in social networks plays an important role in today's life. For applications such as predicting users' political preferences, it is particularly important to be able to analyze the dynamics of competing opinions. While observing the evolution of polar opinions of a social network's users over time, can we tell when the network behaved abnormally? Furthermore, can we predict how the opinions of the users will change in the future? Do opinions evolve according to existing network opinion dynamics models? To answer such questions, it is not sufficient to study individual user behavior, since opinions can spread far beyond users' egonets. We need a method to analyze opinion dynamics of all network users simultaneously and capture the effect of individuals' behavior on the global evolution pattern of the social network. In this work, we introduce Social Network Distance (SND), a distance measure that quantifies the cost of evolution of one snapshot of a social network into another snapshot under various models of polar opinion propagation. SND has the rich semantics of a transportation problem, yet is computable in time linear in the number of users, which makes SND applicable to the analysis of large-scale online social networks. In our experiments with synthetic and real-world Twitter data, we demonstrate the utility of our distance measure for anomalous event detection. It achieves a true positive rate of 0.83, twice as high as that of alternatives. When employed for opinion prediction in Twitter, our method's accuracy is 75.63%, which is 7.5% higher than that of the next best method. Source code: https://cs.ucsb.edu/~victor/pub/ucsb/dbl/snd/
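SND itself is defined through the paper's transportation formulation over opinion propagation models; as a loose and much simplified analogy only, the snippet below compares two opinion snapshots of the same user set with a one-dimensional earth mover's distance over opinion values. The snapshots are made-up toy data.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Polar opinions in [-1, 1] for the same five users at two points in time.
snapshot_t0 = np.array([-0.8, -0.2, 0.1, 0.5, 0.9])
snapshot_t1 = np.array([-0.9, -0.6, 0.0, 0.7, 0.8])

# Minimal "cost of moving opinion mass" between the two snapshots.
print("transport cost:", wasserstein_distance(snapshot_t0, snapshot_t1))
```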
Influence overlap is a universal phenomenon in influence spreading in social networks. In this paper, we argue that the redundant influence generated by influence overlap causes a negative effect for maximizing spreading influence. First, we present a theoretical method to calculate the influence overlap and record the redundant influence. Then, in terms of eliminating redundant influence, we present two algorithms, namely Degree-Redundant-Influence (DRS) and Degree-Second-Neighborhood (DSN), for identifying multiple spreaders. Experiments on four empirical social networks verify the methods, and the spreaders selected by the DSN algorithm show smaller degree and k-core values.
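As an illustration of the overlap-elimination idea only (not the paper's DRS or DSN algorithms), the sketch below greedily picks high-degree spreaders while skipping candidates whose neighborhoods largely overlap with nodes already covered; the overlap threshold is an assumption.

```python
import networkx as nx

def select_spreaders(G, k, max_overlap=0.5):
    """Greedy high-degree selection that skips candidates whose neighborhood
    mostly overlaps with nodes already covered by chosen spreaders."""
    covered, chosen = set(), []
    for v in sorted(G.nodes, key=G.degree, reverse=True):
        nbrs = set(G.neighbors(v))
        overlap = len(nbrs & covered) / max(len(nbrs), 1)
        if overlap <= max_overlap:        # candidate adds mostly new influence
            chosen.append(v)
            covered |= nbrs | {v}
        if len(chosen) == k:
            break
    return chosen

G = nx.karate_club_graph()
print("selected spreaders:", select_spreaders(G, k=3))
```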
While social networks are widely used as a medium for information diffusion, attackers can also strategically employ analytical tools, such as influence maximization, to maximize the spread of adversarial content through the networks. We investigate the problem of limiting the diffusion of negative information by blocking nodes and edges in the network. We formulate the interaction between the defender and the attacker as a Stackelberg game where the defender first chooses a set of nodes to block and then the attacker selects a set of seeds from which to spread negative information. This yields an extremely complex bi-level optimization problem, particularly since even the standard influence measures are difficult to compute. Our approach is to approximate the attacker's problem as the maximum node domination problem. To solve this problem, we first develop a method based on integer programming combined with constraint generation. Next, to improve scalability, we develop an approximate solution method that represents the attacker's problem as an integer program, and then combines relaxation with duality to yield an upper bound on the defender's objective that can be computed using mixed integer linear programming. Finally, we propose an even more scalable heuristic method that prunes nodes from the consideration set based on their degree. Extensive experiments demonstrate the efficacy of our approaches.
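A heavily simplified stand-in for the interaction described above: the defender blocks a few nodes chosen purely by degree (a crude proxy for the paper's pruning heuristic), the attacker then seeds the highest-degree remaining nodes, and spread is estimated with a rough live-edge approximation of the independent cascade model. All parameters are illustrative assumptions.

```python
import random
import networkx as nx

def live_edge_spread(G, seeds, p=0.1, trials=200):
    """Crude spread estimate: keep each edge with probability p and count the
    nodes reachable from the seeds in the sampled (live-edge) graph."""
    total = 0
    for _ in range(trials):
        H = nx.Graph([e for e in G.edges if random.random() < p])
        H.add_nodes_from(seeds)
        reached = set()
        for s in seeds:
            reached |= nx.node_connected_component(H, s)
        total += len(reached)
    return total / trials

random.seed(1)
G = nx.barabasi_albert_graph(300, 3)
by_degree = sorted(G.nodes, key=G.degree, reverse=True)

blocked = by_degree[:5]                     # defender blocks top-degree nodes
G_def = G.copy()
G_def.remove_nodes_from(blocked)
attacker_seeds = sorted(G_def.nodes, key=G_def.degree, reverse=True)[:5]

print("attacker spread, no blocking  :", live_edge_spread(G, by_degree[:5]))
print("attacker spread, with blocking:", live_edge_spread(G_def, attacker_seeds))
```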
Social networks play a fundamental role in the diffusion of information. However, there are two distinct ways in which information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence of external out-of-network sources, like the mainstream media. While most existing models of information adoption in networks assume information only passes from node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one-month trace of Twitter, we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect information adoption. We discover that information tends to jump across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, while the remaining 29% is due to external events and factors outside the network.
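The two adoption channels can be mimicked with a toy simulation (the dynamics and parameters below are assumptions, not the paper's fitted model): each inactive user may adopt either because an already-active neighbor exposes them or because of a constant external source, and the final tally attributes adoptions to the two channels.

```python
import random
import networkx as nx

random.seed(2)
G = nx.barabasi_albert_graph(2000, 2)
p_net, p_ext, steps = 0.05, 0.001, 50                 # per-step adoption probabilities

active = {random.choice(list(G.nodes)): "external"}   # one initial external adopter
for _ in range(steps):
    snapshot = set(active)                            # adopters at the start of the step
    for v in G.nodes:
        if v in active:
            continue
        if any(u in snapshot for u in G.neighbors(v)) and random.random() < p_net:
            active[v] = "network"                     # adopted via an active neighbor
        elif random.random() < p_ext:
            active[v] = "external"                    # adopted via the external source

n = len(active)
net_share = sum(1 for src in active.values() if src == "network") / n
print(f"adopters: {n}  network share: {net_share:.0%}  external share: {1 - net_share:.0%}")
```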