
Centrality Measures for Networks with Community Structure

Published by Naveen Gupta
Publication date: 2016
Research language: English





Understanding the network structure and finding the influential nodes are challenging issues in large networks. Identifying the most influential nodes can be useful in many applications, such as the immunization of nodes for epidemic control or the protection of complex networks against intentional attacks. A great deal of research has been devoted to devising centrality measures that can efficiently identify the most influential nodes in a network. There are two major approaches to the problem: on one hand, deterministic strategies exploit knowledge of the overall network topology to find the influential nodes; on the other hand, random strategies are completely agnostic about the network structure. Centrality measures that can deal with limited knowledge of the network structure are therefore required. In practice, information about the global structure of the network is rarely available or is hard to acquire, and even when it is available, the network may be so large that computing global centrality measures is computationally prohibitive. To that end, a centrality measure is proposed that requires information only at the community level to identify the influential nodes in the network. Indeed, most real-world networks exhibit a community structure that can be exploited efficiently to discover the influential nodes. We perform a comparative evaluation of prominent global deterministic strategies and stochastic strategies against an existing community-based strategy and the proposed deterministic community-based strategy. The effectiveness of the proposed method is evaluated through experiments on synthetic and real-world networks with community structure, in the scenario of node immunization for epidemic control.
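As an illustration of the general idea, the sketch below ranks nodes using only community membership: each node's score combines its within-community degree with the number of other communities it bridges. The abstract does not give the proposed measure's formula, so this scoring rule, the choice of community detector, and all names here are assumed stand-ins, not the paper's method.

```python
# Hedged sketch: ranking nodes for immunization from community-level
# information only. The score below (within-community degree plus number of
# bridged communities) is an assumed stand-in for the paper's measure.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def community_based_scores(G):
    # Assign each node to a community (any community detector could be used here).
    communities = greedy_modularity_communities(G)
    membership = {v: i for i, comm in enumerate(communities) for v in comm}

    scores = {}
    for v in G.nodes():
        neighbour_comms = {membership[u] for u in G.neighbors(v)}
        within = sum(1 for u in G.neighbors(v) if membership[u] == membership[v])
        # Nodes that are well connected inside their own community and also
        # bridge several other communities receive the highest scores.
        scores[v] = within + len(neighbour_comms - {membership[v]})
    return scores

if __name__ == "__main__":
    G = nx.karate_club_graph()
    ranking = sorted(community_based_scores(G).items(), key=lambda kv: -kv[1])
    print("Top candidates for immunization:", [v for v, _ in ranking[:5]])
```

On the Zachary karate-club graph, this heuristic ranks the community hubs and bridging nodes highest, which are the kind of targets one would immunize first under the epidemic-control scenario described above.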


Read also

The calculation of centrality measures is common practice in the study of networks, as they attempt to quantify the importance of individual vertices, edges, or other components. Different centralities attempt to measure importance in different ways. In this paper, we examine a conjecture posed by E. Estrada regarding the ability of several measures to distinguish the vertices of networks. Estrada conjectured that if all vertices of a graph have the same subgraph centrality, then all vertices must also have the same degree, eigenvector, closeness, and betweenness centralities. We provide a counterexample for the latter two centrality measures and propose a revised conjecture.
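A small, hedged sketch (not taken from the paper) of how such a claim can be checked numerically with NetworkX: compute the five centralities named above and test whether each one is constant across the vertices of a given graph.

```python
# Hedged sketch: is each centrality measure uniform over the vertices of G?
import networkx as nx

def centrality_uniformity_report(G, tol=1e-9):
    measures = {
        "subgraph": nx.subgraph_centrality(G),
        "degree": nx.degree_centrality(G),
        "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
        "closeness": nx.closeness_centrality(G),
        "betweenness": nx.betweenness_centrality(G),
    }
    # A measure is "uniform" if its spread over the vertices is below tol.
    return {name: max(vals.values()) - min(vals.values()) < tol
            for name, vals in measures.items()}

if __name__ == "__main__":
    # A vertex-transitive graph (here a 5-cycle) is uniform in every measure;
    # the counterexamples discussed in the paper are not reproduced here.
    print(centrality_uniformity_report(nx.cycle_graph(5)))
```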
Community detection is a significant and challenging task in network research. Recently, much attention has been focused on local methods of community detection. Among them, community detection with a greedy algorithm typically starts from the identification of locally essential nodes, called central nodes, of the network; communities then expand from these central nodes by optimizing a modularity function. In this paper, we propose a new central node indicator and a new modularity function. Our central node indicator, which we call the local centrality indicator (LCI), is as efficient as the well-known global maximal degree indicator and local maximal degree indicator; on certain special network structures, LCI performs even better. On the other hand, our modularity function F2 overcomes certain disadvantages, such as the resolution limit problem, of the modularity functions raised in previous literature. Combined with a greedy algorithm, LCI and F2 enable us to identify the right community structures for both real-world networks and simulated benchmark networks. Evaluation based on the normalized mutual information (NMI) suggests that our community detection method with a greedy algorithm based on LCI and F2 outperforms many other methods. Therefore, the method we propose in this paper is potentially noteworthy.
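The abstract does not define LCI or F2, so the skeleton below only illustrates the seed-and-expand pattern it describes: a local-maximal-degree rule stands in for the central-node indicator, and a simple internal/external edge ratio stands in for the quality function. Both are assumptions for illustration; the paper's definitions will differ.

```python
# Hedged sketch of greedy, seed-based local community detection.
import networkx as nx

def local_degree_seeds(G):
    # A node is a seed if no neighbour has a strictly larger degree
    # (a stand-in for the central-node indicator).
    deg = dict(G.degree())
    return [v for v in G if all(deg[v] >= deg[u] for u in G.neighbors(v))]

def quality(G, comm):
    # Stand-in quality function: fraction of edges kept inside the community.
    internal = G.subgraph(comm).number_of_edges()
    external = sum(1 for u in comm for w in G.neighbors(u) if w not in comm)
    return internal / (internal + external + 1e-9)

def grow_community(G, seed):
    comm = {seed}
    improved = True
    while improved:
        improved = False
        frontier = {w for u in comm for w in G.neighbors(u)} - comm
        for w in sorted(frontier, key=lambda x: -G.degree(x)):
            if quality(G, comm | {w}) > quality(G, comm):
                comm.add(w)
                improved = True
    return comm

if __name__ == "__main__":
    G = nx.karate_club_graph()
    for s in local_degree_seeds(G):
        print(s, sorted(grow_community(G, s)))
```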
We introduce a new conception of community structure, which we refer to as hidden community structure. Hidden community structure refers to a specific type of overlapping community structure, in which the detection of weak, but meaningful, communities is hindered by the presence of stronger communities. We present Hidden Community Detection (HICODE), an algorithm template that identifies both the strong, dominant community structure as well as the weaker, hidden community structure in networks. HICODE begins by first applying an existing community detection algorithm to a network, and then removing the structure of the detected communities from the network. In this way, the structure of the weaker communities becomes visible. Through application of HICODE, we demonstrate that a wide variety of real networks from different domains contain many communities that, though meaningful, are not detected by any of the popular community detection algorithms that we consider. Additionally, on both real and synthetic networks containing a hidden ground-truth community structure, HICODE uncovers this structure better than any baseline algorithms that we compared against. For example, on a real network of undergraduate students that can be partitioned either by 'Dorm' (residence hall) or 'Year', we see that HICODE uncovers the weaker 'Year' communities with a JCRecall score (a recall-based metric that we define in the text) of over 0.7, while the baseline algorithms achieve scores below 0.2.
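A minimal sketch of the template as described above: detect the dominant communities, weaken their internal structure, and re-run the detector so the weaker layer becomes visible. The specific reduction step used here (removing a random fraction of intra-community edges) and the choice of Louvain as the base detector are assumptions; the paper's exact procedure may differ.

```python
# Hedged sketch of a HICODE-style detect / weaken / re-detect loop.
import random
import networkx as nx
from networkx.algorithms.community import louvain_communities

def hidden_layers(G, n_layers=2, remove_frac=0.5, seed=0):
    rng = random.Random(seed)
    H = G.copy()
    layers = []
    for _ in range(n_layers):
        comms = louvain_communities(H, seed=seed)
        layers.append(comms)
        # Weaken the detected structure: drop a fraction of intra-community
        # edges so that any weaker, overlapping structure can surface next.
        for comm in comms:
            internal = [(u, v) for u, v in H.edges(comm) if u in comm and v in comm]
            H.remove_edges_from(e for e in internal if rng.random() < remove_frac)
    return layers
```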
Competition networks are formed via adversarial interactions between actors. The Dynamic Competition Hypothesis predicts that influential actors in competition networks should have a large number of common out-neighbors with many other nodes. We empirically study this idea as a centrality score and find the measure predictive of importance in several real-world networks including food webs, conflict networks, and voting data from Survivor.
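A hedged sketch of a common-out-neighbour score in the spirit of this description: each actor is scored by how many out-neighbours it shares with every other actor in the directed competition network. The exact definition and any normalisation used in the paper may differ.

```python
# Hedged sketch: common-out-neighbour (CON) scores on a directed graph.
import networkx as nx

def common_out_neighbour_score(G: nx.DiGraph):
    out = {v: set(G.successors(v)) for v in G.nodes()}
    return {u: sum(len(out[u] & out[v]) for v in G.nodes() if v != u)
            for u in G.nodes()}

if __name__ == "__main__":
    G = nx.gnp_random_graph(8, 0.4, directed=True, seed=1)
    scores = common_out_neighbour_score(G)
    print(max(scores, key=scores.get), "has the highest CON score")
```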
As relational datasets modeled as graphs keep increasing in size and their data-acquisition is permeated by uncertainty, graph-based analysis techniques can become computationally and conceptually challenging. In particular, node centrality measures rely on the assumption that the graph is perfectly known -- a premise not necessarily fulfilled for large, uncertain networks. Accordingly, centrality measures may fail to faithfully extract the importance of nodes in the presence of uncertainty. To mitigate these problems, we suggest a statistical approach based on graphon theory: we introduce formal definitions of centrality measures for graphons and establish their connections to classical graph centrality measures. A key advantage of this approach is that centrality measures defined at the modeling level of graphons are inherently robust to stochastic variations of specific graph realizations. Using the theory of linear integral operators, we define degree, eigenvector, Katz and PageRank centrality functions for graphons and establish concentration inequalities demonstrating that graphon centrality functions arise naturally as limits of their counterparts defined on sequences of graphs of increasing size. The same concentration inequalities also provide high-probability bounds between the graphon centrality functions and the centrality measures on any sampled graph, thereby establishing a measure of uncertainty of the measured centrality score.
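For concreteness, a sketch of how such graphon centrality functions are commonly written, consistent with the description above; the paper's precise normalisations are not reproduced in this abstract and may differ.

```latex
% Graphon W : [0,1]^2 \to [0,1] and its induced linear integral operator.
\[
  (T_W f)(x) = \int_0^1 W(x,y)\, f(y)\, \mathrm{d}y .
\]
% Degree centrality function: the operator applied to the constant function 1.
\[
  d_W(x) = (T_W \mathbf{1})(x) = \int_0^1 W(x,y)\, \mathrm{d}y .
\]
% Eigenvector centrality function: the eigenfunction of T_W associated with
% its largest eigenvalue; Katz and PageRank variants are built from powers
% and resolvents of the same operator.
\[
  T_W\, e_W = \lambda_{\max}\, e_W .
\]
```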