
A New Methodology for Generalizing Unweighted Network Measures

 Added by Sherief Abdallah
 Publication date 2009
Field: Physics
Language: English





Several important complex network measures that helped discover common patterns across real-world networks ignore edge weights, which carry important information in real-world networks. We propose a new methodology for generalizing measures of unweighted networks through a generalization of the cardinality concept of a set of weights. The key observation here is that many measures of unweighted networks use the cardinality (the size) of some subset of edges in their computation. For example, the node degree is the number of edges incident to a node. We define the effective cardinality, a new metric that quantifies how many edges are effectively being used, assuming that an edge's weight reflects the amount of interaction across that edge. We prove that a generalized measure, using our method, reduces to the original unweighted measure if there is no disparity between weights, which ensures that the laws that govern the original unweighted measure will also govern the generalized measure when the weights are equal. We also prove that our generalization ensures a partial ordering (among sets of weighted edges) that is consistent with the original unweighted measure, unlike previously developed generalizations. We illustrate the applicability of our method by generalizing four unweighted network measures. As a case study, we analyze four real-world weighted networks using our generalized degree and clustering coefficient. The analysis shows that the generalized degree distribution is consistent with the power-law hypothesis but with steeper decline and that there is a common pattern governing the ratio between the generalized degree and the traditional degree. The analysis also shows that nodes with more uniform weights tend to cluster with nodes that also have more uniform weights among themselves.
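The idea of an effective cardinality can be sketched concretely. The snippet below is an illustrative implementation, not the paper's exact definition: it assumes an entropy-based effective cardinality (the perplexity, i.e. the exponential of the Shannon entropy, of the normalized weights), a choice that satisfies the key property stated in the abstract — when all weights are equal it reduces to the plain cardinality of the edge set.

```python
import math

def effective_cardinality(weights):
    """Effective number of edges in a set of weighted edges.

    Illustrative sketch: uses the perplexity (exponential of the
    Shannon entropy) of the normalized weights. When all weights are
    equal this equals len(weights), matching the reduction property
    the abstract requires for the unweighted case.
    """
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return math.exp(entropy)

# Generalized degree of a node = effective cardinality of its incident edges.
equal = effective_cardinality([2.0, 2.0, 2.0, 2.0])    # equal weights -> 4.0
skewed = effective_cardinality([10.0, 0.1, 0.1, 0.1])  # one edge dominates -> close to 1
```

The skewed case shows why the measure is useful: a node with four incident edges, one of which carries nearly all the interaction, behaves much more like a degree-1 node than a degree-4 node.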




Related Research

In this paper we analyse the street network of London both in its primary and dual representation. To understand its properties, we consider three idealised models based on a grid, a static random planar graph and a growing random planar graph. Comparing the models and the street network, we find that the streets of London form a self-organising system whose growth is characterised by a strict interaction between the metrical and informational space. In particular, a principle of least effort appears to create a balance between the physical and the mental effort required to navigate the city.
An increasing demand for environmental radioactivity monitoring comes both from the scientific community and from society. This requires accurate, reliable and fast response, preferably from portable radiation detectors. Thanks to recent improvements in the technology, $\gamma$-spectroscopy with sodium iodide scintillators has proved to be an excellent tool for in-situ measurements for the identification and quantitative determination of $\gamma$-ray emitting radioisotopes, reducing time and costs. Both for geological and civil purposes, not only $^{40}$K, $^{238}$U, and $^{232}$Th have to be measured, but there is also a growing interest in determining the abundances of anthropic elements, like $^{137}$Cs and $^{131}$I, which are used to monitor the effect of nuclear accidents or other human activities. The Full Spectrum Analysis (FSA) approach has been chosen to analyze the $\gamma$-spectra. Non Negative Least Squares (NNLS) and an energy calibration adjustment have been implemented in this method for the first time, in order to correct the intrinsic problem related to the $\chi^2$ minimization, which could lead to artifacts and non-physical results in the analysis. A new calibration procedure has been developed for the FSA method by using in-situ $\gamma$-spectra instead of calibration pad spectra. Finally, the new method has been validated by acquiring $\gamma$-spectra with a 10.16 cm x 10.16 cm sodium iodide detector in 80 different sites in the Ombrone basin, in Tuscany. The results from the FSA method have been compared with laboratory measurements using HPGe detectors on soil samples collected at the different sites, showing a satisfactory agreement between them. In particular, the $^{137}$Cs isotope has been included in the analysis since it was found to be non-negligible during the in-situ measurements.
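The FSA-with-NNLS step described above is, at its core, a non-negative linear unmixing problem: the measured spectrum is modeled as a non-negative combination of reference spectra, one per isotope. The sketch below illustrates this with made-up component spectra (the matrix values and the simple multiplicative-update solver are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

# Measured spectrum b modeled as A @ x: each column of A is the reference
# spectrum of one radioisotope, x holds the non-negative activities.
# All numbers here are invented for illustration.
A = np.array([
    [0.7, 0.1, 0.0],
    [0.2, 0.6, 0.1],
    [0.1, 0.2, 0.5],
    [0.0, 0.1, 0.4],
])  # rows: energy bins; columns: three hypothetical isotope templates
true_x = np.array([3.0, 1.0, 2.0])
b = A @ true_x

# Simple multiplicative-update NNLS (Lee & Seung style): iterates stay
# non-negative by construction, avoiding the non-physical negative
# activities that an unconstrained least-squares fit can produce.
x = np.ones(3)
for _ in range(5000):
    x *= (A.T @ b) / (A.T @ A @ x + 1e-12)
```

The non-negativity constraint is exactly what makes NNLS preferable to plain $\chi^2$ minimization here: a negative fitted activity has no physical meaning.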
Computational modeling is widely used to study how humans and organizations search and solve problems in fields such as economics, management, cultural evolution, and computer science. We argue that current computational modeling research on human problem-solving needs to address several fundamental issues in order to generate more meaningful and falsifiable contributions. Based on comparative simulations and a new type of visualization for assessing the nature of the fitness landscape, we address two key assumptions that approaches such as the NK framework rely on: that the NK model captures the continuum of the complexity of empirical fitness landscapes, and that search behavior is a distinct component, independent from the topology of the fitness landscape. We show the limitations of the most common approach to conceptualizing how complex, or rugged, a landscape is, as well as how the nature of the fitness landscape is fundamentally intertwined with search behavior. Finally, we outline broader implications for how to simulate problem-solving.
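For readers unfamiliar with the NK framework criticized above, a minimal sketch of the standard construction (Kauffman's NK model, with circular neighbourhoods — the specific layout here is one common convention, not necessarily the one used in the paper):

```python
import numpy as np

def nk_fitness(genome, tables, K):
    """Fitness of a bit-string on an NK landscape.

    Locus i's contribution depends on its own bit and its K right-hand
    neighbours (circular); overall fitness is the mean contribution.
    tables[i] maps each of the 2**(K+1) neighbourhood states to a
    random value in [0, 1). Larger K couples more loci together and
    makes the landscape more rugged.
    """
    N = len(genome)
    total = 0.0
    for i in range(N):
        # Encode the (K+1)-bit neighbourhood starting at locus i as an index.
        state = 0
        for j in range(K + 1):
            state = (state << 1) | int(genome[(i + j) % N])
        total += tables[i][state]
    return total / N

rng = np.random.default_rng(0)
N, K = 8, 2
tables = rng.random((N, 2 ** (K + 1)))
genome = rng.integers(0, 2, N)
f = nk_fitness(genome, tables, K)  # a value in [0, 1)
```

The paper's critique targets precisely the habit of treating the single tuning knob K as a faithful proxy for the complexity of empirical landscapes.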
Keywords in scientific articles have found their significance in information filtering and classification. In this article, we empirically investigated statistical characteristics and evolutionary properties of keywords in a prominent journal, namely the Proceedings of the National Academy of Sciences of the United States of America (PNAS), including frequency distribution, temporal scaling behavior, and decay factor. The empirical results indicate that the keyword frequency in PNAS approximately follows a Zipf's law with exponent 0.86. In addition, there is a power-law correlation between the cumulative number of distinct keywords and the cumulative number of keyword occurrences. Extensive empirical analysis of some other journals' data is also presented, with the decaying trends of the most popular keywords being monitored. Interestingly, top journals from various subjects share a very similar decaying tendency, while journals with low impact factors exhibit completely different behavior. These empirical characteristics may shed some light on an in-depth understanding of semantic evolutionary behaviors. In addition, the analysis of keyword-based systems is helpful for the design of corresponding recommender systems.
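The Zipf-exponent estimate mentioned above is typically obtained by a straight-line fit in log-log coordinates of the rank-frequency data. A minimal sketch with synthetic data (the exact fitting procedure used in the paper is not specified; ordinary least squares on log-transformed data is assumed here for illustration):

```python
import numpy as np

# Rank-frequency data following an exact Zipf law f(r) = C * r**(-0.86),
# mimicking the keyword-frequency pattern reported for PNAS. Fitting a
# straight line to log f versus log r recovers the exponent as minus
# the slope.
ranks = np.arange(1, 501)
freqs = 1000.0 * ranks ** (-0.86)

slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
exponent = -slope  # recovers 0.86 by construction
```

On real keyword counts the points scatter around the line, and more robust estimators (e.g. maximum likelihood for power laws) are often preferred over the log-log regression shown here.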
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents $\alpha$. By permuting independently drawn samples from a power-law distribution, we present non-trivial bounds on the memory strength (1st-order autocorrelation) as a function of $\alpha$, which are markedly different from the ordinary $\pm 1$ bounds for Gaussian or uniform distributions. When $1 < \alpha \leq 3$, as $\alpha$ grows, the upper bound increases from 0 to +1 while the lower bound remains 0; when $\alpha > 3$, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on posts on Twitter, ratings on MovieLens, calling records of the mobile operator Orange, and browsing behavior on Taobao, we find that empirical power-law distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks, and challenge the validity of measures like autocorrelation and the assortativity coefficient in heterogeneous systems.
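The permutation argument above can be illustrated numerically: the memory strength depends only on the ordering of a fixed sample set, so one can compare a random permutation with a maximally ordered (sorted) arrangement. This is a simulation sketch, not the paper's derivation; the Pareto tail index 1.5 is an arbitrary choice inside the $1 < \alpha \leq 3$ regime.

```python
import numpy as np

def lag1_autocorrelation(x):
    """Sample first-order autocorrelation of a sequence."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return (d[:-1] * d[1:]).sum() / (d * d).sum()

# Heavy-tailed samples: Pareto with tail exponent alpha = 1.5.
rng = np.random.default_rng(42)
samples = 1.0 + rng.pareto(1.5, size=10_000)

# A random permutation of the same values gives near-zero memory, while
# even the sorted (maximally ordered) arrangement stays below the +1
# ceiling that Gaussian data can reach, illustrating the constrained
# upper bound in the heavy-tailed regime.
r_shuffled = lag1_autocorrelation(rng.permutation(samples))
r_sorted = lag1_autocorrelation(np.sort(samples))
```

Because the variance is dominated by the few largest samples, no rearrangement of heavy-tailed data can push the lag-1 autocorrelation all the way to +1, which is the core of the constraint the paper formalizes.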
