Network embedding techniques aim at representing structural properties of graphs in geometric space. These representations are useful in downstream tasks such as link prediction and clustering. However, the number of available graph embedding methods is large, and practitioners face the non-trivial choice of selecting the proper approach for a given application. The present work attempts to close this gap of knowledge through a systematic comparison of eleven different methods for graph embedding. We consider methods for embedding networks in hyperbolic and Euclidean metric spaces, as well as non-metric community-based embedding methods. We apply these methods to embed more than one hundred real-world and synthetic networks. Three common downstream tasks -- mapping accuracy, greedy routing, and link prediction -- are used to evaluate the quality of the various embedding methods. Our results show that some Euclidean embedding methods excel in greedy routing. As for link prediction, community-based and hyperbolic embedding methods yield overall performance superior to that of Euclidean-space-based approaches. We compare the running times of the different methods and further analyze the impact of network characteristics such as degree distribution, modularity, and clustering coefficient on the quality of the different embedding methods. We release our evaluation framework to provide a standardized benchmark for arbitrary embedding methods.
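To make the greedy-routing task concrete, here is a minimal sketch (not the paper's evaluation framework) of greedy forwarding on a hypothetical toy network embedded in the Euclidean plane: each node forwards toward the neighbor geometrically closest to the target, and routing fails when no neighbor gets strictly closer.

```python
import math

# Hypothetical toy network: a 4-cycle embedded as the corners of a unit square.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
coords = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}

def greedy_route(adj, coords, source, target, max_hops=100):
    """Greedy routing: forward to the neighbor geometrically closest to the
    target; return the hop count on success, None on failure (local minimum)."""
    dist = lambda u, v: math.dist(coords[u], coords[v])
    current, hops = source, 0
    while current != target and hops < max_hops:
        nxt = min(adj[current], key=lambda n: dist(n, target))
        if dist(nxt, target) >= dist(current, target):
            return None  # stuck: no neighbor is strictly closer to the target
        current, hops = nxt, hops + 1
    return hops if current == target else None

hops = greedy_route(adj, coords, 0, 2)  # opposite corners of the square
```

The greedy success rate (fraction of source-target pairs routed without failure) is one standard measure of embedding quality in this setting.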
The fundamental idea of embedding a network in a metric space is rooted in the principle of proximity preservation. Nodes are mapped into points of the space with pairwise distances that reflect their proximity in the network. Popular methods employed in network embedding either rely on implicit approximations of the principle of proximity preservation or implement it by enforcing the geometry of the embedding space, thus hindering geometric properties that networks may spontaneously exhibit. Here, we take advantage of a model-free embedding method explicitly devised for preserving pairwise proximity, and characterize the geometry emerging from the mapping of several networks, both real and synthetic. We show that the learned embedding has simple and intuitive interpretations: the distance of a node from the geometric center is representative of its closeness centrality, and the relative positions of nodes reflect the community structure of the network. Proximity can be preserved in relatively low-dimensional embedding spaces, and the hidden geometry displays optimal performance in guiding greedy navigation regardless of the specific network topology. We finally show that the mapping provides a natural description of contagion processes on networks, with complex spatiotemporal patterns represented by waves propagating from the geometric center to the periphery. These findings deepen our understanding of the model-free hidden geometry of complex networks.
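The network quantity that the radial coordinate is said to track, closeness centrality, is easy to compute directly; a minimal BFS-based sketch (assuming a connected, unweighted toy graph, not the paper's embedding method):

```python
from collections import deque

def closeness_centrality(adj, v):
    """Closeness of v: (n - 1) divided by the sum of shortest-path
    distances from v to all reachable nodes, computed by BFS."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for n in adj[u]:
            if n not in dist:
                dist[n] = dist[u] + 1
                queue.append(n)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

# Toy path graph a - b - c: the middle node is the most central.
path = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
center = closeness_centrality(path, 'b')  # 2 / (1 + 1) = 1.0
end = closeness_centrality(path, 'a')     # 2 / (1 + 2) = 2/3
```

In the paper's picture, nodes with higher closeness would sit nearer the geometric center of the learned embedding.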
Containment measures implemented by some countries to suppress the spread of COVID-19 have resulted in a slowdown of the epidemic characterized by time series of daily infections plateauing over extended periods of time. We prove that such a dynamical pattern is compatible with critical Susceptible-Infected-Removed (SIR) dynamics. In traditional analyses of the critical SIR model, the critical dynamical regime is started from a single infected node. The application of containment measures to an ongoing epidemic, however, has the effect of making the system enter its critical regime with a potentially large number of infected individuals. We describe how such non-trivial starting conditions affect the critical behavior of the SIR model. We perform a theoretical and large-scale numerical investigation of the model. We show that the expected outbreak size is an increasing function of the initial number of infected individuals, while the expected duration of the outbreak is a non-monotonic function of the initial number of infected individuals. Also, we precisely characterize the magnitude of the fluctuations associated with the size and duration of the outbreak in critical SIR dynamics with non-trivial initial conditions. Far from herd immunity, fluctuations are much larger than average values, thus indicating that predictions of plateauing time series may be particularly challenging.
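The dependence of outbreak size on the initial number of infected individuals can be illustrated with a toy simulation. The sketch below uses a critical branching process (mean offspring exactly 1) as a standard proxy for SIR dynamics at the epidemic threshold; the offspring distribution and the truncation size are illustrative choices, not the paper's model.

```python
import random

def critical_outbreak(i0, rng, max_size=10_000):
    """Total outbreak size of a critical branching process started from
    i0 infected individuals; sizes are truncated at max_size because the
    critical mean is otherwise dominated by arbitrarily large outbreaks."""
    size, active = i0, i0
    while active and size < max_size:
        nxt = 0
        for _ in range(active):
            # Geometric(1/2) offspring: mean exactly 1, i.e. the critical point
            while rng.random() < 0.5:
                nxt += 1
        size, active = size + nxt, nxt
    return min(size, max_size)

rng = random.Random(42)
mean_1 = sum(critical_outbreak(1, rng) for _ in range(400)) / 400
mean_50 = sum(critical_outbreak(50, rng) for _ in range(400)) / 400
```

Averaged over many runs, the truncated mean size grows with the initial condition i0, while individual runs fluctuate wildly around that mean, in line with the abstract's point about large fluctuations.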
Multiplex networks are convenient mathematical representations for many real-world -- biological, social, and technological -- systems of interacting elements, where pairwise interactions among elements have different flavors. Previous studies pointed out that real-world multiplex networks display significant inter-layer correlations -- degree-degree correlation, edge overlap, node similarities -- able to make them robust against random and targeted failures of their individual components. Here, we show that inter-layer correlations are also important in the characterization of their $\mathbf{k}$-core structure, namely the organization in shells of nodes with increasingly high degree. Understanding $k$-core structures is important in the study of spreading processes taking place on networks, as, for example, in the identification of influential spreaders and the emergence of localization phenomena. We find that, if the degree distribution of the network is heterogeneous, then a strong $\mathbf{k}$-core structure is well predicted by significantly positive degree-degree correlations. However, if the network degree distribution is homogeneous, then strong $\mathbf{k}$-core structure is due to positive correlations at the level of node similarities. We reach our conclusions by analyzing different real-world multiplex networks, introducing novel techniques for controlling inter-layer correlations of networks without changing their structure, and taking advantage of synthetic network models with tunable levels of inter-layer correlations.
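For a single layer, the $k$-core structure mentioned above is obtained by the classic peeling algorithm: repeatedly remove the minimum-degree node, tracking the largest degree seen at removal. A minimal single-layer sketch (the paper's multiplex analysis is more involved):

```python
def core_numbers(adj):
    """k-core decomposition by iterative peeling: each node's core number
    is the largest k such that it belongs to a subgraph of minimum degree k."""
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        v = min(remaining, key=lambda u: deg[u])  # peel a minimum-degree node
        k = max(k, deg[v])                        # core level never decreases
        core[v] = k
        remaining.remove(v)
        for n in adj[v]:
            if n in remaining:
                deg[n] -= 1
    return core

# Toy graph: a triangle a-b-c with a pendant node d attached to a.
g = {'a': ['b', 'c', 'd'], 'b': ['a', 'c'], 'c': ['a', 'b'], 'd': ['a']}
cores = core_numbers(g)  # triangle nodes form the 2-core, d only the 1-core
```

The shells of the decomposition are exactly the sets of nodes sharing a core number, and the abstract's question is how inter-layer correlations shape these shells across layers.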
Existing information-theoretic frameworks based on maximum entropy network ensembles are not able to explain the emergence of heterogeneity in complex networks. Here, we fill this gap of knowledge by developing a classical framework for networks based on finding an optimal trade-off between the information content of a compressed representation of the ensemble and the information content of the actual network ensemble. In this way, we not only introduce a novel classical network ensemble satisfying a set of soft constraints, but are also able to calculate the optimal distribution of the constraints. We show that for the classical network ensemble in which the only constraints are the expected degrees, a power-law degree distribution is optimal. We also study spatially embedded networks, finding that the interactions between nodes naturally lead to a non-uniform spread of nodes in the space, with pairs of nodes at a given distance not necessarily obeying a power-law distribution. The pertinent features of real-world air transportation networks are well described by the proposed framework.
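A soft-constraint (canonical) ensemble with fixed expected degrees can be sampled with a simple factorized linking rule. The sketch below uses the standard Chung-Lu construction as an illustration of such an ensemble; it is not the specific compressed-representation framework proposed in the abstract.

```python
import random

def soft_constraint_graph(weights, rng):
    """Sample a graph where node i has expected degree close to weights[i],
    by linking each pair (i, j) independently with probability
    w_i * w_j / W (capped at 1), W being the total weight."""
    W = sum(weights)
    edges = set()
    for i in range(len(weights)):
        for j in range(i + 1, len(weights)):
            if rng.random() < min(1.0, weights[i] * weights[j] / W):
                edges.add((i, j))
    return edges

# With uniformly large weights the linking probability saturates at 1,
# so the sample is the complete graph on 5 nodes (10 edges).
dense = soft_constraint_graph([100.0] * 5, random.Random(1))
```

In the abstract's framework, the interesting question is not how to sample given the constraints, but which distribution of the constraint values (here, the weights) is informationally optimal; the claimed answer for expected degrees is a power law.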
Complex networks have acquired great popularity in recent years, since the graph representation of many natural, social, and technological systems is often very helpful to characterize and model their phenomenology. Additionally, the mathematical tools of statistical physics have proven to be particularly suitable for studying and understanding complex networks. Nevertheless, an important obstacle to this theoretical approach remains the difficulty of drawing parallels between network science and more traditional aspects of statistical physics. In this paper, we explore the relation between complex networks and a well known topic of statistical physics: renormalization. A general method to analyze renormalization flows of complex networks is introduced. The method can be applied to study any suitable renormalization transformation. Finite-size scaling can be performed on computer-generated networks in order to classify them in universality classes. We also present applications of the method on real networks.
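One common family of network renormalization transformations is box covering: tile the graph with balls of a fixed radius, then contract each ball into a super-node. A minimal sketch of one such step (greedy, deterministic seed order; an illustrative choice, not the paper's specific transformation):

```python
def coarse_grain(adj, radius):
    """One renormalization step: greedily cover the graph with BFS balls of
    the given radius, then contract each ball into a super-node. Returns the
    node-to-box assignment and the adjacency of the coarse-grained graph."""
    uncovered = set(adj)
    box_of, n_boxes = {}, 0
    for seed in sorted(adj):          # deterministic seed order for clarity
        if seed not in uncovered:
            continue
        ball, frontier = {seed}, {seed}
        for _ in range(radius):       # grow the ball, claiming uncovered nodes
            frontier = {n for u in frontier for n in adj[u]
                        if n in uncovered and n not in ball}
            ball |= frontier
        for v in ball:
            box_of[v] = n_boxes
        uncovered -= ball
        n_boxes += 1
    # Contract: boxes are linked if any original edge crosses between them.
    super_adj = {b: set() for b in range(n_boxes)}
    for u in adj:
        for v in adj[u]:
            bu, bv = box_of[u], box_of[v]
            if bu != bv:
                super_adj[bu].add(bv)
                super_adj[bv].add(bu)
    return box_of, super_adj

# A path of 5 nodes renormalizes (radius 1) into a path of 3 super-nodes.
path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
boxes, coarse = coarse_grain(path5, 1)
```

Iterating such a step and tracking how observables change along the flow is the kind of analysis the abstract's finite-size-scaling classification builds on.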
Community structure is one of the most important features of real networks and reveals the internal organization of the nodes. Many algorithms have been proposed, but the crucial issue of testing, i.e. the question of how good an algorithm is with respect to others, is still open. Standard tests include the analysis of simple artificial graphs with a built-in community structure that the algorithm has to recover. However, the special graphs adopted in actual tests have a structure that does not reflect the real properties of nodes and communities found in real networks. Here we introduce a new class of benchmark graphs that account for the heterogeneity in the distributions of node degrees and of community sizes. We use this new benchmark to test two popular methods of community detection, modularity optimization and Potts model clustering. The results show that the new benchmark poses a much more severe test for algorithms than standard benchmarks, revealing limits that may not be apparent at a first analysis.
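The key ingredient of such heterogeneous benchmarks is drawing node degrees and community sizes from power-law distributions. A minimal inverse-transform sampler for a truncated continuous power law (the exponents and cutoffs below are illustrative, not prescribed values):

```python
import random

def truncated_powerlaw(gamma, xmin, xmax, rng):
    """Inverse-transform sample from p(x) ~ x^(-gamma) truncated to
    [xmin, xmax]; gamma > 1 is assumed."""
    a, b = xmin ** (1 - gamma), xmax ** (1 - gamma)
    u = rng.random()
    return (a + u * (b - a)) ** (1 / (1 - gamma))

rng = random.Random(0)
# Heterogeneous degree sequence (many small degrees, a few hubs) and
# heterogeneous community sizes, as in degree/community-size benchmarks.
degrees = [truncated_powerlaw(2.5, 2.0, 100.0, rng) for _ in range(1000)]
community_sizes = [truncated_powerlaw(1.5, 10.0, 200.0, rng) for _ in range(50)]
```

A full benchmark generator additionally splits each node's degree into intra- and inter-community stubs according to a mixing parameter and wires the stubs; a ready-made implementation is available as `LFR_benchmark_graph` in NetworkX.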
We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited $c$ times varies widely between different disciplines, but all distributions collapse onto a universal curve when the relative indicator $c_f=c/c_0$ is considered, where $c_0$ is the average number of citations per article for the discipline. In addition, we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of $c_f$ as an unbiased indicator of citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields.
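The rescaling step itself is straightforward; a minimal sketch with made-up citation counts (the numbers below are purely illustrative):

```python
def relative_indicator(citations_by_field):
    """Field-normalized citations c_f = c / c0, where c0 is the field's
    average number of citations per article; this makes citation counts
    comparable across disciplines."""
    return {field: [c / (sum(cs) / len(cs)) for c in cs]
            for field, cs in citations_by_field.items()}

# Two hypothetical fields with very different raw citation scales: after
# rescaling by the field average, the profiles become directly comparable.
data = {'field_A': [10, 20, 30], 'field_B': [1, 2, 3]}
cf = relative_indicator(data)
```

Although the raw counts differ by an order of magnitude, the rescaled values coincide, which is the kind of collapse the abstract reports on real citation data.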