
Observability transitions in clustered networks

Published by Takehisa Hasegawa
Publication date: 2018
Research language: English


We investigate the effect of clustering on network observability transitions. In the observability model introduced by Yang, Wang, and Motter [Phys. Rev. Lett. 109, 258701 (2012)], a given fraction of nodes are chosen at random, and they and their neighbors are considered observable, while all other nodes are unobservable. For the observability model on random clustered networks, we derive the normalized sizes of the largest observable component (LOC) and the largest unobservable component (LUC). Considering the case where the numbers of edges and triangles of each node are given by Poisson distributions, we find that both the LOC and the LUC are affected by the network's clustering: more highly clustered networks have lower critical node fractions for forming a macroscopic LOC and LUC, but this effect is small and becomes almost negligible unless the average degree is small. We also evaluate bounds on these critical points to confirm clustering's weak or negligible effect on the network observability transition. The accuracy of our analytical treatment is confirmed by Monte Carlo simulations.
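
As a rough illustration of the model described in this abstract, the following is a minimal Monte Carlo sketch of the observability rule (directly observed nodes plus their neighbors are observable), run here on an Erdős–Rényi graph rather than the random clustered networks analyzed in the paper; the graph choice, the fraction phi, and the function names are illustrative assumptions, not the authors' code.

    # Minimal sketch of the observability model (illustrative only).
    # Assumption: an Erdos-Renyi graph stands in for the random clustered
    # networks of the paper; phi is the fraction of directly observed nodes.
    import random
    import networkx as nx

    def observability_components(G, phi, seed=None):
        rng = random.Random(seed)
        # Directly observed nodes: a random fraction phi of all nodes.
        direct = {v for v in G if rng.random() < phi}
        # Observable = directly observed nodes plus all of their neighbors.
        observable = set(direct)
        for v in direct:
            observable.update(G.neighbors(v))
        unobservable = set(G) - observable

        def largest(nodes):
            sub = G.subgraph(nodes)
            return max((len(c) for c in nx.connected_components(sub)), default=0)

        n = G.number_of_nodes()
        # Normalized sizes of the largest observable / unobservable components.
        return largest(observable) / n, largest(unobservable) / n

    G = nx.erdos_renyi_graph(n=10_000, p=4 / 10_000, seed=1)
    loc, luc = observability_components(G, phi=0.05, seed=1)
    print(f"LOC = {loc:.3f}, LUC = {luc:.3f}")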


Read also

We examine the structure of the percolating cluster (PC) formed by site percolation on a random clustered network (RCN) model. Using generating functions, we formulate the clustering coefficient and assortativity coefficient of the PC. We show analytically and numerically that the PC in highly clustered networks remains clustered even at the percolation threshold. The assortativity of the PC depends on the details of the RCN. The PC at the percolation threshold is disassortative when the numbers of edges and triangles of each node are assigned by Poisson distributions, but assortative when each node in the RCN has the same small number of edges, most of which form triangles. This result seemingly contradicts the disassortativity of fractal networks, although the renormalization scheme unveils the disassortative nature of a fractal PC.
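
For a hands-on illustration of the quantities in this abstract, the sketch below performs site percolation and measures the size, clustering, and degree assortativity of the largest occupied cluster; the Holme–Kim test graph, the occupation probability p_occ, and the helper name are illustrative assumptions, not the RCN model or the generating-function analysis of the paper.

    # Sketch: site percolation, then clustering / assortativity of the
    # largest occupied cluster (standing in for the percolating cluster).
    import random
    import networkx as nx

    def percolating_cluster_stats(G, p_occ, seed=None):
        rng = random.Random(seed)
        # Site percolation: keep each node independently with probability p_occ.
        occupied = [v for v in G if rng.random() < p_occ]
        H = G.subgraph(occupied)
        if H.number_of_nodes() == 0:
            return None
        # Largest occupied component plays the role of the percolating cluster.
        pc = H.subgraph(max(nx.connected_components(H), key=len))
        return {
            "size_fraction": pc.number_of_nodes() / G.number_of_nodes(),
            "clustering": nx.average_clustering(pc),
            "assortativity": nx.degree_assortativity_coefficient(pc),
        }

    # Holme-Kim graph: a simple clustered test network (illustrative choice).
    G = nx.powerlaw_cluster_graph(n=5000, m=3, p=0.5, seed=2)
    print(percolating_cluster_stats(G, p_occ=0.4, seed=2))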
A bridge in a graph is an edge whose removal disconnects the graph, increasing the number of connected components. We calculate the fraction of bridges in a wide range of real-world networks and their randomized counterparts. We find that real networks typically have more bridges than their completely randomized counterparts, but a very similar fraction of bridges to their degree-preserving randomizations. We define a new edge centrality measure, called bridgeness, to quantify the importance of a bridge in damaging a network. We find that certain real networks have very large average and variance of bridgeness compared to their degree-preserving randomizations and to other real networks. Finally, we offer an analytical framework to calculate the bridge fraction and the average and variance of bridgeness for uncorrelated random networks with arbitrary degree distributions.
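
The bridge fraction discussed in this abstract can be estimated numerically; the sketch below counts bridges with networkx and compares a small example network against a degree-preserving randomization produced by double edge swaps. The choice of test graph and the number of swaps are assumptions for illustration, and the bridgeness measure itself is not implemented here.

    # Sketch: fraction of bridges in a graph vs. a degree-preserving randomization.
    import networkx as nx

    def bridge_fraction(G):
        # nx.bridges yields edges whose removal increases the number of components.
        return sum(1 for _ in nx.bridges(G)) / G.number_of_edges()

    G = nx.karate_club_graph()
    print("original:", bridge_fraction(G))

    # Degree-preserving randomization via repeated double edge swaps.
    R = G.copy()
    nx.double_edge_swap(R, nswap=10 * R.number_of_edges(), max_tries=10**6, seed=3)
    print("degree-preserving randomization:", bridge_fraction(R))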
In network science, complex systems are represented as mathematical graphs consisting of a set of nodes representing the components and a set of edges representing their interactions. The framework of networks has led to significant advances in the understanding of the structure, formation, and function of complex systems. Social and biological processes such as the dynamics of epidemics, the diffusion of information in social media, the interactions between species in ecosystems, or the communication between neurons in our brains are all actively studied using dynamical models on complex networks. In all of these systems, the patterns of connections at the individual level play a fundamental role in the global dynamics, and finding the most important nodes allows one to better understand and predict their behavior. An important research effort in network science has therefore been dedicated to the development of methods for finding the most important nodes in networks. In this short entry, we describe network centrality measures based on the notions of network traversal they rely on. This entry is intended as an introduction to this extremely vast topic, with contributions from several fields, and is by no means an exhaustive review of the literature on network centralities.
Social network analysis tools can infer various attributes just by scrutinizing one's connections. Several researchers have studied the problem faced by an evader whose goal is to strategically rewire their social connections in order to mislead such tools, thereby concealing their private attributes. To date, however, this literature has only considered static networks, neglecting the more general case of temporal networks, where the structure evolves over time. Driven by this observation, we study how the evader can conceal their importance from an adversary armed with temporal centrality measures. We consider computational and structural aspects of this problem: Is it computationally feasible to calculate optimal ways of hiding? If it is, what network characteristics facilitate hiding? This topic has been studied in static networks, but in this work we add realism to the problem by considering temporal networks, whose edges change in time. We find that it is usually computationally infeasible to find the optimal way of hiding. On the other hand, by manipulating one's contacts, one can gain a surprising amount of privacy. Compared to static networks, temporal networks offer more strategies for this type of manipulation and are thus, to some extent, easier to hide in.
In social networks, the collective behavior of large populations can be shaped by a small set of influencers through a cascading process induced by peer pressure. For large-scale networks, efficiently identifying multiple influential spreaders with a linear-time algorithm in threshold models that exhibit a first-order transition remains a challenging task. Here we address this issue by exploring collective influence in general threshold models of behavior cascading. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of the subcritical path allows us to introduce a linearly scalable algorithm for massively large-scale networks. Results on both synthetic random graphs and real networks show that the proposed method achieves larger collective influence for the same number of seeds compared with other linearly scalable heuristic approaches.
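
For context, the sketch below simulates a simple fractional-threshold cascade of the kind referred to in this abstract, seeded with a naive high-degree heuristic; it illustrates the cascade dynamics only and is not the authors' subcritical-path algorithm, and the threshold value and seed choice are illustrative assumptions.

    # Sketch: fractional-threshold cascade (Watts-type) from a set of seed nodes.
    # This illustrates the cascade dynamics only, not the subcritical-path method.
    import networkx as nx

    def threshold_cascade(G, seeds, theta=0.2):
        active = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in G:
                if v in active:
                    continue
                nbrs = list(G.neighbors(v))
                # v adopts once the active fraction of its neighbors reaches theta.
                if nbrs and sum(u in active for u in nbrs) / len(nbrs) >= theta:
                    active.add(v)
                    changed = True
        return len(active) / G.number_of_nodes()

    G = nx.erdos_renyi_graph(n=5000, p=6 / 5000, seed=4)
    # Illustrative seed choice: highest-degree nodes (a simple heuristic baseline).
    seeds = [v for v, _ in sorted(G.degree, key=lambda x: -x[1])[:25]]
    print("cascade size:", threshold_cascade(G, seeds, theta=0.18))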