We propose a novel algorithm that outputs the final standings of a soccer league, based on a simple dynamics that mimics a soccer tournament. In our model, a team is created with a defined potential (ability), which is updated during the tournament according to the results of previous games. The updated potential modifies the team's future winning/losing probabilities. We show that this evolutionary game is able to reproduce the statistical properties of the final standings of actual editions of the Brazilian tournament (Brasileirão). However, other leagues, such as the Italian and the Spanish tournaments, exhibit notoriously non-Gaussian traces and cannot be straightforwardly reproduced by this evolutionary non-Markovian model. A complete understanding of these phenomena deserves much more attention, but we suggest a simple explanation based on data collected in Brazil: there, several different teams have been crowned champion in previous editions, corroborating that the champion typically emerges from random fluctuations that partly preserve the Gaussian traces during the tournament. On the other hand, in the Italian and Spanish leagues only a few teams in recent history have won their league tournaments. These leagues are based on more robust and hierarchical structures established even before the beginning of the tournament. For the sake of completeness, we also elaborate a totally Gaussian model (which equalizes the winning, drawing, and losing probabilities) and show that the scores of the Brasileirão cannot be reproduced. These findings stress that the evolutionary ingredients are not superfluous in our modeling. Finally, we analyse the distortions of our model in situations where a large number of teams is considered, showing the existence of a transition from a single-peaked to a double-peaked histogram of the final classification scores. An interesting scaling is presented for tournaments of different sizes.
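A minimal sketch of such an evolutionary league is given below. The logistic bias of the match outcome on the potential gap, the flat draw probability, and the ±0.1 reinforcement of the winner's potential are illustrative assumptions, not the exact rules of the model described above.

```python
import random

def simulate_league(n_teams=20, seed=None):
    """Evolutionary league sketch: potentials bias match outcomes and are
    themselves updated by the results (illustrative rules only)."""
    rng = random.Random(seed)
    potential = [rng.gauss(0.0, 1.0) for _ in range(n_teams)]  # initial abilities
    points = [0] * n_teams
    for home in range(n_teams):
        for away in range(n_teams):
            if home == away:
                continue
            # home-win probability grows with the potential gap (assumed logistic form)
            diff = potential[home] - potential[away]
            p_home = 1.0 / (1.0 + 10 ** (-diff))
            p_draw = 0.25                      # flat draw probability (assumed)
            r = rng.random()
            if r < p_draw:
                points[home] += 1
                points[away] += 1
            elif r < p_draw + (1 - p_draw) * p_home:
                points[home] += 3
                potential[home] += 0.1         # winner reinforced, loser weakened
                potential[away] -= 0.1
            else:
                points[away] += 3
                potential[away] += 0.1
                potential[home] -= 0.1
    return sorted(points, reverse=True)        # final standings (points)

print(simulate_league(seed=1))
```

Repeating the simulation over many independent seasons yields a histogram of final scores that can be compared with the empirical standings.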
Recent studies show that in interdependent networks a very small failure in one network may lead to catastrophic consequences. Above a critical fraction of interdependent nodes, even a single node failure can invoke cascading failures that may abruptly fragment the system, while below this critical dependency (CD) a failure of a few nodes leads only to small damage to the system. So far, research has focused on interdependent random networks without space limitations. However, many real systems, such as power grids and the Internet, are not random but are spatially embedded. Here we analytically and numerically analyze the stability of systems consisting of interdependent spatially embedded networks modeled as lattice networks. Surprisingly, we find that in lattice systems, in contrast to non-embedded systems, there is no CD and any small fraction of interdependent nodes leads to an abrupt collapse. We show that this extreme vulnerability of very weakly coupled lattices is a consequence of the critical exponent describing the percolation transition of a single lattice. Our results are important for understanding the vulnerabilities and for designing robust interdependent spatially embedded networks.
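The cascading mechanism can be illustrated with a generic interdependent-percolation recipe on two coupled square lattices. The one-to-one dependency between same-site nodes, the lattice size, and the choice of `networkx` for the connectivity bookkeeping are illustrative assumptions; the analytical treatment above does not rely on this particular implementation.

```python
import random
import networkx as nx

def cascade(L=30, p=0.7, q=0.1, seed=0):
    """Cascading failure between two interdependent L x L lattices (sketch).
    p : fraction of lattice-A nodes surviving the initial random failure
    q : fraction of nodes that depend on a node in the other lattice
    """
    rng = random.Random(seed)
    A = nx.grid_2d_graph(L, L)
    B = nx.grid_2d_graph(L, L)
    nodes = list(A.nodes())
    # one-to-one dependency at the same site for a random fraction q of nodes (assumed)
    dep = {v: v for v in rng.sample(nodes, int(q * len(nodes)))}

    alive_A = set(rng.sample(nodes, int(p * len(nodes))))   # initial failure in A
    alive_B = set(nodes)

    def giant(G, alive):
        sub = G.subgraph(alive)
        if sub.number_of_nodes() == 0:
            return set()
        return max(nx.connected_components(sub), key=len)

    while True:
        alive_A = giant(A, alive_A)                          # only the giant cluster stays functional
        new_B = {v for v in alive_B if v not in dep or dep[v] in alive_A}
        new_B = giant(B, new_B)                              # dependency failures propagate to B
        new_A = {v for v in alive_A if v not in dep or dep[v] in new_B}
        if new_A == alive_A and new_B == alive_B:            # fixed point reached
            break
        alive_A, alive_B = new_A, new_B

    return len(alive_A) / len(nodes)                         # surviving fraction of lattice A

print(cascade())
```

Scanning `p` for several small values of `q` gives a crude numerical picture of how even weak coupling sharpens the collapse of the mutual giant component.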
We study the avalanche statistics observed in a minimal random growth model. The growth is governed by a reproduction rate obeying a probability distribution with finite mean $a$ and variance $v_a$. These two control parameters determine whether the avalanche size tends to a stationary distribution (finite-scale statistics with finite mean and variance, or power-law-tailed statistics with exponent in $(1, 3]$), or instead to a non-stationary regime with log-normal statistics. Numerical results and their statistical analysis are presented for a uniformly distributed growth rate; these are corroborated and generalized by analytical results. The latter show that the numerically observed avalanche regimes exist for a wide family of growth-rate distributions and provide a precise definition of the boundaries between the three regimes.
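A minimal numerical sketch of such a growth process is shown below. The multiplicative update of the activity, the uniform window around the mean rate, and the extinction threshold are illustrative choices; the exact growth rule and parameter names of the model above may differ.

```python
import random

def avalanche_size(mean_rate=0.95, width=0.5, max_steps=10**6, seed=None):
    """Avalanche grown by a multiplicative process whose reproduction rate
    is drawn uniformly with mean `mean_rate` and half-width `width`
    (illustrative sketch only)."""
    rng = random.Random(seed)
    active = 1.0          # current activity
    size = 0.0            # accumulated avalanche size
    for _ in range(max_steps):
        size += active
        rate = rng.uniform(mean_rate - width, mean_rate + width)
        active *= rate
        if active < 1e-12:             # avalanche has died out
            break
    return size

# crude look at the statistics of many avalanches
sizes = [avalanche_size(seed=i) for i in range(1000)]
print(min(sizes), max(sizes))
```

Varying the mean and width of the uniform rate distribution moves the sampled sizes between narrow, heavy-tailed, and non-stationary log-normal-like regimes, which is the behaviour classified analytically in the paper.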
For any branching process, we demonstrate that the typical total number $r_{\rm mp}(\nu\tau)$ of events triggered over all generations within any sufficiently large time window $\tau$ exhibits, at criticality, a super-linear dependence $r_{\rm mp}(\nu\tau) \sim (\nu\tau)^\gamma$ (with $\gamma > 1$) on the total number $\nu\tau$ of immigrants arriving at the Poisson rate $\nu$. In branching processes in which immigrants (or sources) are characterized by fertilities distributed according to an asymptotic power law tail with tail exponent $1 < \gamma \leqslant 2$, the exponent of the super-linear law for $r_{\rm mp}(\nu\tau)$ is identical to the exponent $\gamma$ of the distribution of fertilities. For $\gamma > 2$ and for standard branching processes without a power law distribution of fertilities, $r_{\rm mp}(\nu\tau) \sim (\nu\tau)^2$. This novel scaling law replaces and tames the divergence $\nu\tau/(1-n)$ of the mean total number ${\bar R}_t(\tau)$ of events as the branching ratio $n$ (defined as the average number of triggered events of first generation per source) tends to 1. The derivation uses the formalism of generating probability functions. The corresponding prediction is confirmed by numerical calculations, and a heuristic derivation illuminates its underlying mechanism. We also show that ${\bar R}_t(\tau)$ is always linear in $\nu\tau$, even at criticality ($n=1$). Our results thus illustrate the fundamental difference between the mean total number, which is controlled by a few extremely rare realizations, and the typical behavior represented by $r_{\rm mp}(\nu\tau)$.
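The distinction between the mean and the typical total number can be probed with a simple Monte Carlo sketch of a branching process with Poisson immigration. The generation-aggregated Poisson offspring, the omission of the time structure inside the window, and the use of the median as a proxy for the most probable value are simplifying assumptions; power-law fertilities are not included here.

```python
import numpy as np

def total_events(nu_tau, n=1.0, rng=None, cap=10**7):
    """Total number of events (immigrants plus all triggered generations):
    Poisson(nu_tau) immigrants, each event producing Poisson(n) offspring.
    Generations are aggregated (sum of Poissons is Poisson), which ignores
    the internal time structure of the window."""
    rng = rng or np.random.default_rng()
    total = 0
    active = rng.poisson(nu_tau)              # zeroth generation: the immigrants
    while active > 0 and total < cap:
        total += active
        active = rng.poisson(n * active)      # next generation of triggered events
    return total

rng = np.random.default_rng(0)
samples = [total_events(50.0, n=1.0, rng=rng) for _ in range(200)]
print(int(np.median(samples)), int(np.mean(samples)))  # typical vs mean behaviour
```

Even in this crude experiment the sample mean is inflated by a few huge realizations while the median stays moderate, which is the qualitative content of the result above.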
An efficient technique is introduced for model inference of complex nonlinear dynamical systems driven by noise. The technique does not require extensive global optimization, provides optimal compensation for noise-induced errors, and is robust over a broad range of parameters of dynamical models. It is applied to a clinically measured blood pressure signal for the simultaneous inference of the strength, directionality, and noise intensities in the nonlinear interaction between the cardiac and respiratory oscillations.
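As a generic illustration only (not the inference technique of the paper, which also recovers noise intensities and compensates noise-induced bias), the sketch below fits drift parameters of a parametrized noisy oscillator to measured increments by least squares. The synthetic van der Pol-like signal, the basis functions, and the parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)

# synthetic "measured" signal: noisy van der Pol-like oscillator, true mu = 1.5, k = 1
x, v = 1.0, 0.0
xs = []
for _ in t:
    xs.append(x)
    a = 1.5 * (1 - x**2) * v - x
    v += a * dt + 0.1 * np.sqrt(dt) * rng.standard_normal()
    x += v * dt
xs = np.array(xs)

# finite-difference acceleration and a linear-in-parameters basis for the drift
vs = np.gradient(xs, dt)
acc = np.gradient(vs, dt)
basis = np.column_stack([(1 - xs**2) * vs, -xs])     # columns for [mu, k]
coef, *_ = np.linalg.lstsq(basis, acc, rcond=None)
print("estimated mu, k:", coef)
```

This naive estimator is biased by the measurement noise entering the finite differences, which is precisely the kind of error the technique described above is designed to compensate.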
We have studied the distribution of the traffic flow $q$ for the Nagel-Schreckenberg model by computer simulations. We applied a large-deviation approach, which allowed us to obtain the distribution $P(q)$ over more than one hundred decades in probability, down to probabilities as small as $10^{-140}$. This enabled us to characterize the flow distribution over a large range of its support and to identify the characteristics of rare and even very rare traffic situations. We observe a change of the distribution shape when increasing the density of cars from the free-flow to the congested phase. Furthermore, we characterize typical and rare traffic situations by measuring correlations of $q$ with other quantities, such as the density of standing cars or the number and size of traffic jams.
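For reference, the standard Nagel-Schreckenberg update on a ring and a plain measurement of the flow $q$ are sketched below. This is straightforward sampling only; reaching probabilities of order $10^{-140}$ requires the biased large-deviation Monte Carlo scheme described above on top of such a simulation, and the parameter values here are merely illustrative.

```python
import random

def nasch_flow(L=1000, density=0.15, vmax=5, p=0.25, steps=2000, seed=0):
    """Nagel-Schreckenberg model on a ring; returns the time-averaged flow
    q = density * mean velocity after a warm-up period."""
    rng = random.Random(seed)
    n_cars = int(density * L)
    pos = sorted(rng.sample(range(L), n_cars))
    vel = [0] * n_cars
    flow_sum, measured = 0.0, 0
    for step in range(steps):
        new_pos = [0] * n_cars
        for i in range(n_cars):
            gap = (pos[(i + 1) % n_cars] - pos[i] - 1) % L
            v = min(vel[i] + 1, vmax)          # 1. acceleration
            v = min(v, gap)                    # 2. braking to avoid collision
            if v > 0 and rng.random() < p:     # 3. random slowdown
                v -= 1
            vel[i] = v
            new_pos[i] = (pos[i] + v) % L      # 4. parallel movement
        pos = new_pos
        if step >= steps // 2:                 # measure after warm-up
            flow_sum += sum(vel) / L
            measured += 1
    return flow_sum / measured

print(nasch_flow())
```

Sampling `nasch_flow` over many disorder realizations gives only the typical part of $P(q)$; the tails probed in the study come from reweighted simulations that bias the dynamics toward atypical flow values.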