
Understanding scaling through history-dependent processes with collapsing sample space

Posted by: Bernat Corominas-Murtra (BCM)
Publication date: 2014
Research field: Physics
Paper language: English





History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those associated with complex systems, become more constrained as they unfold, meaning that their sample space, the set of possible outcomes, reduces as they age. We demonstrate that these sample-space reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, $p(x) \sim x^{-\lambda}$, where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of how strongly a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions, ranging from $\alpha = 2$ to $\infty$. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, how they are related to diffusion processes on directed networks, and to ageing processes such as fragmentation. SSR processes provide a new alternative for understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organised critical processes.
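
The noisy SSR process described above is simple enough to simulate directly. Below is a minimal Python sketch (function names and parameters are our own, for illustration only): a walker over states $1,\dots,N$ jumps, with probability `lam`, uniformly to a state strictly below its current one (the SSR rule) and otherwise uniformly over all states (noise); per the abstract, the visit frequency of state $i$ should then scale as $i^{-\lambda}$.

```python
import random
from collections import Counter

def noisy_ssr(n_states=1000, n_jumps=500_000, lam=1.0, seed=42):
    """Simulate a noisy sample-space reducing (SSR) process.

    With probability `lam` the walker jumps uniformly to a state
    strictly below its current one (the SSR rule); otherwise it
    jumps uniformly over all states (pure noise). On reaching
    state 1 it restarts uniformly over all states.
    """
    rng = random.Random(seed)
    visits = Counter()
    state = rng.randint(1, n_states)
    for _ in range(n_jumps):
        visits[state] += 1
        if state == 1 or rng.random() > lam:
            state = rng.randint(1, n_states)   # restart / noise jump
        else:
            state = rng.randint(1, state - 1)  # SSR jump below current state
    return visits

# For lam = 0.5 the abstract predicts p(i) ~ i**(-0.5), so the visit
# count of state 1 should be roughly 10x that of state 100.
counts = noisy_ssr(lam=0.5)
print([counts[i] for i in (1, 10, 100)])
```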


Read also

The formation of sentences is a highly structured and history-dependent process. The probability of using a specific word in a sentence strongly depends on the history of word usage earlier in that sentence. We study a simple history-dependent model of text generation which assumes that the sample space of word usage reduces, on average, as a sentence forms. We first show that the model explains the approximate Zipf law found in word frequencies as a direct consequence of sample-space reduction. We then empirically quantify the amount of sample-space reduction in the sentences of ten famous English books, by analysing the corresponding word-transition tables that capture which words can follow any given word in a text. We find a highly nested structure in these transition tables and show that this 'nestedness' is tightly related to the power-law exponents of the observed word-frequency distributions. With the proposed model it is possible to understand that the nestedness of a text can be the origin of the actual scaling exponent, and that deviations from the exact Zipf law can be understood as variations of the degree of nestedness on a book-by-book basis. On a theoretical level we show that in the case of weak nesting, Zipf's law breaks down in a fast transition. Unlike previous attempts to understand Zipf's law in language, the sample-space reducing model is not based on assumptions of multiplicative, preferential, or self-organised critical mechanisms behind language formation, but simply uses the empirically quantifiable nestedness parameter to understand the statistics of word frequencies.
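
The word-transition tables described above are straightforward to build. The paper's exact nestedness estimator is not given in the abstract, so the sketch below uses a simple proxy of our own devising: the fraction of word pairs whose follower sets are contained in one another.

```python
from collections import defaultdict
from itertools import combinations

def follower_sets(words):
    """Word-transition table: map each word to the set of distinct
    words observed to follow it."""
    followers = defaultdict(set)
    for a, b in zip(words, words[1:]):
        followers[a].add(b)
    return followers

def nestedness(followers):
    """Proxy for nestedness (our assumption, not necessarily the
    authors' estimator): the fraction of word pairs whose follower
    sets are nested, i.e. one is a subset of the other. Checks
    O(V^2) pairs, so intended for modest vocabularies."""
    vocab = list(followers)
    nested = total = 0
    for a, b in combinations(vocab, 2):
        total += 1
        if followers[a] <= followers[b] or followers[b] <= followers[a]:
            nested += 1
    return nested / total if total else 0.0

text = "the quick brown fox jumps over the lazy dog the fox".split()
print(nestedness(follower_sets(text)))
```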
It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample-space reducing processes (SSRPs) offer a new alternative mechanism for understanding the emergence of scaling in countless processes. The corresponding power-law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes characterized by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to $-1$ (Zipf's law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node-visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf's law emerges as a generic feature of diffusion on networks, regardless of their details, and that the exponent of visiting times is related to the amount of cycles in a network, could be relevant for a series of applications in traffic, transport, and supply-chain management.
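
As a quick illustration of the claimed robustness, here is a hedged Python sketch (names and the restart rule are our own assumptions) of a noise-free SSR process with a non-uniform prior: from state $i$ the walker jumps to a state $j < i$ with probability proportional to a prior weight $q(j)$. The abstract's claim is that the ranked visit frequencies still approach Zipf's law, exponent $-1$, for almost any prior.

```python
import random
from collections import Counter

def ssr_with_prior(prior, n_jumps=200_000, seed=1):
    """SSR process over states 1..N with prior weights prior[0..N-1]:
    from state i > 1 jump to j in {1,...,i-1} with probability
    proportional to prior[j-1]; at state 1 restart over all states,
    again drawing from the prior."""
    rng = random.Random(seed)
    n = len(prior)
    states = list(range(1, n + 1))
    visits = Counter()
    state = rng.choices(states, weights=prior)[0]
    for _ in range(n_jumps):
        visits[state] += 1
        if state == 1:
            state = rng.choices(states, weights=prior)[0]
        else:
            state = rng.choices(states[:state - 1],
                                weights=prior[:state - 1])[0]
    return visits

# Even a strongly non-uniform (exponential) prior should leave the
# ranked visit frequencies close to Zipf's law, per the abstract.
prior = [0.9 ** i for i in range(100)]
counts = ssr_with_prior(prior)
print(sorted(counts.values(), reverse=True)[:10])
```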
M.C. Gonzalez, C.A. Hidalgo, 2008
Despite their importance for urban planning, traffic forecasting, and the spread of biological and mobile viruses, our understanding of the basic laws governing human motion remains limited owing to the lack of tools to monitor the time-resolved location of individuals. Here we study the trajectories of 100,000 anonymized mobile-phone users whose positions were tracked over a six-month period. We find that, in contrast with the random trajectories predicted by the prevailing Lévy-flight and random-walk models, human trajectories show a high degree of temporal and spatial regularity, each individual being characterized by a time-independent characteristic length scale and a significant probability of returning to a few highly frequented locations. After correcting for differences in travel distances and the inherent anisotropy of each trajectory, the individual travel patterns collapse onto a single spatial probability distribution, indicating that despite the diversity of their travel histories, humans follow simple reproducible patterns. This inherent similarity in travel patterns could impact all phenomena driven by human mobility, from epidemic prevention to emergency response, urban planning, and agent-based modeling.
In some systems, the connecting probability (and thus the percolation process) between two sites depends on the geometric distance between them. To understand such processes, we propose gravitationally correlated percolation models for link-adding networks on the two-dimensional lattice $G$, with two strategies $S_{\rm max}$ and $S_{\rm min}$ for adding a link $l_{i,j}$ connecting site $i$ and site $j$ with masses $m_i$ and $m_j$, respectively; $m_i$ and $m_j$ are the sizes of the clusters containing site $i$ and site $j$. The probability of adding the link $l_{i,j}$ is related to the generalized gravity $g_{ij} \equiv m_i m_j / r_{ij}^d$, where $r_{ij}$ is the geometric distance between $i$ and $j$, and $d$ is an adjustable decay exponent. At the beginning of the simulation, all sites of $G$ are occupied and there are no links. During the simulation, two inter-cluster links $l_{i,j}$ and $l_{k,n}$ are randomly chosen and the generalized gravities $g_{ij}$ and $g_{kn}$ are computed. In the strategy $S_{\rm max}$, the link with the larger generalized gravity is added. In the strategy $S_{\rm min}$, the link with the smaller generalized gravity is added; this strategy includes percolation on the Erdős-Rényi random graph and the Achlioptas process of explosive percolation as the limiting cases $d \to \infty$ and $d \to 0$, respectively. Adjustable strategies facilitate or inhibit network percolation in a generic view. We calculate percolation thresholds $T_c$ and critical exponents $\beta$ by numerical simulations. We also obtain various finite-size scaling functions for the node fractions in percolating clusters or the arrival of saturation length under the different intervening strategies.
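
The update rule is concrete enough to sketch. The following Python fragment (data layout and names are our own assumptions, not the authors' code) implements a single link-adding step: cluster masses are tracked with a union-find structure, and the generalized gravity $g_{ij} = m_i m_j / r_{ij}^d$ decides which of two randomly drawn inter-cluster links is kept.

```python
import random

class DisjointSet:
    """Union-find over lattice sites, tracking cluster sizes (masses)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            if self.size[ra] < self.size[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]

def gravity(ds, i, j, pos, d):
    """Generalized gravity g_ij = m_i * m_j / r_ij**d, where masses
    are the sizes of the clusters containing sites i and j."""
    (xi, yi), (xj, yj) = pos[i], pos[j]
    r = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
    return ds.size[ds.find(i)] * ds.size[ds.find(j)] / r ** d

def add_link(ds, candidates, pos, d, strategy="max"):
    """One step: draw two random inter-cluster links (the caller must
    keep `candidates` restricted to inter-cluster links), keep the one
    with larger (S_max) or smaller (S_min) generalized gravity."""
    (i, j), (k, n) = random.sample(candidates, 2)
    g1, g2 = gravity(ds, i, j, pos, d), gravity(ds, k, n, pos, d)
    chosen = (i, j) if (g1 >= g2) == (strategy == "max") else (k, n)
    ds.union(*chosen)
    return chosen
```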
Network growth processes can be understood as generative models of the structure and history of complex networks. This point of view naturally leads to the problem of network archaeology: reconstructing all past states of a network from its structure, a difficult permutation-inference problem. In this paper, we introduce a Bayesian formulation of network archaeology, with a generalization of preferential attachment as our generative mechanism. We develop a sequential Monte Carlo algorithm to evaluate the posterior averages of this model, as well as an efficient heuristic that uncovers a history well correlated with the true one, in polynomial time. We use these methods to identify and characterize a phase transition in the quality of the reconstructed history when they are applied to artificial networks generated by the model itself. Despite the existence of a no-recovery phase, we find that nontrivial inference is possible in a large portion of the parameter space, as well as on empirical data.
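
To make the inference problem concrete, here is a toy Python sketch; it is not the paper's Bayesian sequential Monte Carlo method. It grows a network by classic preferential attachment (the paper uses a generalization of this mechanism) and then guesses the arrival order by sorting nodes by degree, a crude baseline whose rank correlation with the true history can be checked directly.

```python
import random

def preferential_attachment(n, seed=0):
    """Grow a network: each new node attaches to one existing node
    chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]
    targets = [0, 1]  # each node appears once per unit of degree
    for new in range(2, n):
        old = rng.choice(targets)
        edges.append((new, old))
        targets += [new, old]
    return edges

def degree_order_heuristic(edges, n):
    """Crude history guess: rank nodes by degree, oldest first; under
    preferential attachment early nodes tend to have high degree."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return sorted(range(n), key=lambda v: -deg[v])

n = 500
guess = degree_order_heuristic(preferential_attachment(n), n)
rank = {v: r for r, v in enumerate(guess)}
# Spearman correlation between guessed rank and true arrival index v
rho = 1 - 6 * sum((rank[v] - v) ** 2 for v in range(n)) / (n * (n**2 - 1))
print("Spearman correlation with true history:", round(rho, 3))
```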