Recent approaches to elite identification have highlighted the important role of {\em intermediaries}, by means of a new definition of the core of a multiplex network, the {\em generalised} $K$-core. This newly introduced core subgraph crucially incorporates those individuals who, despite not being very connected, maintain the cohesiveness and plasticity of the core. Interestingly, it has been shown that the performance of the generalised $K$-core on elite identification is substantially better than that of the standard $K$-core. Here we go further: over a multiplex social system, we isolate the community structure of the generalised $K$-core and identify the weakly connected regions acting as bridges between core communities, ensuring the cohesiveness and connectivity of the core region. This gluing region is the {\em weak core} of the multiplex system. We test the suitability of our method on data from the society of 420,000 players of the Massive Multiplayer Online Game {\em Pardus}. Results show that the generalised $K$-core displays a clearly identifiable community structure and that the weak core gluing the core communities shows very low connectivity and clustering. Nonetheless, despite its low connectivity, the weak core forms a unique, cohesive structure. In addition, we find that members populating the weak core have the best scores on social performance when compared to the other elements of the generalised $K$-core. The weak core provides a new angle on understanding the social structure of elites, highlighting those subgroups of individuals whose role is to glue together different communities in the core.
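As a rough illustration of the bridging idea (not the paper's algorithm), the sketch below uses networkx with the standard $K$-core as a stand-in for the generalised $K$-core, modularity-based community detection, and a hypothetical degree cutoff to flag low-degree core members whose neighbours span several core communities.

```python
# Illustrative sketch only: the standard k-core stands in for the
# generalised K-core, and the degree cutoff (<= 4) is an arbitrary,
# assumed threshold for "weakly connected" members.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.barabasi_albert_graph(1000, 3, seed=42)  # placeholder network
core = nx.k_core(G, k=3)

# Partition the core into communities by modularity maximisation.
communities = list(greedy_modularity_communities(core))
label = {v: i for i, c in enumerate(communities) for v in c}

# "Weak core" candidates: low-degree core members whose neighbours
# fall into more than one community, i.e. they glue communities.
weak = [
    v for v in core
    if core.degree(v) <= 4
    and len({label[u] for u in core.neighbors(v)}) > 1
]
print(f"{len(communities)} core communities, {len(weak)} candidate bridges")
```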
The formation of sentences is a highly structured and history-dependent process. The probability of using a specific word in a sentence strongly depends on the history of word usage earlier in that sentence. We study a simple history-dependent model of text generation which assumes that the sample-space of word usage reduces, on average, along sentence formation. We first show that the model explains the approximate Zipf law found in word frequencies as a direct consequence of sample-space reduction. We then empirically quantify the amount of sample-space reduction in the sentences of ten famous English books by analysing the corresponding word-transition tables, which capture which words can follow any given word in a text. We find a highly nested structure in these transition tables and show that this `nestedness' is tightly related to the power-law exponents of the observed word frequency distributions. With the proposed model it is possible to understand that the nestedness of a text can be the origin of the actual scaling exponent, and that deviations from the exact Zipf law can be understood as variations of the degree of nestedness on a book-by-book basis. On a theoretical level we are able to show that in the case of weak nesting, Zipf's law breaks down in a sharp transition. Unlike previous attempts to understand Zipf's law in language, the sample-space reducing model is not based on assumptions of multiplicative, preferential, or self-organised critical mechanisms behind language formation, but simply uses the empirically quantifiable parameter of nestedness to understand the statistics of word frequencies.
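The basic sample-space reducing process behind this model can be simulated in a few lines. A minimal sketch, assuming the common "staircase" formulation: each draw is uniform over the states below the current one, and the process restarts from the top once state 1 is reached. Visit frequencies of state $i$ should then approach the Zipf form $\sim 1/i$.

```python
# Minimal SSR simulation: repeatedly draw a state uniformly below the
# current one until state 1 is reached, then restart. Visit
# frequencies of state i are expected to approach Zipf's law, ~ 1/i.
import random
from collections import Counter

N, RESTARTS = 10_000, 50_000
visits = Counter()
for _ in range(RESTARTS):
    x = N
    while x > 1:
        x = random.randint(1, x - 1)  # sample space shrinks each step
        visits[x] += 1

# Compare empirical frequencies against the normalised Zipf prediction.
total = sum(visits.values())
harmonic = sum(1 / j for j in range(1, N))
for i in (1, 10, 100, 1000):
    print(i, visits[i] / total, (1 / i) / harmonic)
```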
History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those associated with complex systems, become more constrained as they unfold, meaning that their sample-space, or set of possible outcomes, reduces as they age. We demonstrate that these sample-space reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, $p(x)\sim x^{-\lambda}$, where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample-space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions, ranging from $\alpha = 2$ to $\infty$. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes on directed networks and to ageing processes such as fragmentation. SSR processes provide a new alternative for understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organised critical processes.
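A minimal sketch of the noisy variant, assuming the mixing described in the abstract: with probability $\lambda$ the process takes a sample-space reducing step, otherwise it jumps uniformly over the full space, so that visit frequencies should scale as $x^{-\lambda}$. The parameter values are illustrative.

```python
# Noisy SSR sketch: with probability LAM take a sample-space reducing
# step, otherwise jump uniformly over the full space. Per the
# abstract, visit frequencies should follow p(x) ~ x**(-LAM), so LAM
# interpolates between pure Zipf (LAM = 1) and pure noise (LAM = 0).
import random
from collections import Counter

N, STEPS, LAM = 10_000, 1_000_000, 0.5
visits = Counter()
x = N
for _ in range(STEPS):
    if x > 1 and random.random() < LAM:
        x = random.randint(1, x - 1)   # SSR step: shrink sample space
    else:
        x = random.randint(1, N)       # noise step: full sample space
    visits[x] += 1

for i in (1, 10, 100):
    print(i, visits[i] / STEPS)  # expect ~ i**(-LAM) up to normalisation
```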
We build a simple model of leveraged asset purchases with margin calls. Investment funds use what is perhaps the most basic financial strategy, called value investing, i.e. systematically attempting to buy underpriced assets. When funds do not borrow, the price fluctuations of the asset are normally distributed and uncorrelated across time. All this changes when the funds are allowed to leverage, i.e. to borrow from a bank in order to purchase more assets than their wealth would otherwise permit. During good times competition drives investors to funds that use more leverage, because they have higher profits. As leverage increases, price fluctuations become heavy-tailed and display clustered volatility, similar to what is observed in real markets. Previous explanations of fat tails and clustered volatility depended on irrational behavior, such as trend following. Here instead this comes from the fact that leverage limits cause funds to sell into a falling market: a prudent bank makes itself locally safer by putting a limit on leverage, so when a fund exceeds its leverage limit it must partially repay its loan by selling the asset. Unfortunately, this sometimes happens to all the funds simultaneously when the price is already falling. The resulting nonlinear feedback amplifies large downward price movements. At the extreme this causes crashes, but the effect is seen at every time scale, producing a power law of price disturbances. A standard (supposedly more sophisticated) risk-control policy, in which individual banks base leverage limits on volatility, causes leverage to rise during periods of low volatility and to contract more quickly when volatility gets high, making these extreme fluctuations even worse.
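The feedback loop described here can be caricatured in a few lines of Python. The sketch below is a toy version under assumed parameter values and a linear price-impact rule, not the paper's model: a value-investing fund levers up to buy the underpriced asset, and a falling price tightens its margin constraint, forcing sales into the decline.

```python
# Toy caricature of the margin-call feedback (all parameters assumed):
# a value-investing fund demands more of the asset the further the
# price falls below a perceived value V, up to a bank-imposed leverage
# limit. When price drops, leveraged wealth shrinks faster, the limit
# binds, and forced selling amplifies the move.
import random

V, MAX_LEV, BETA, IMPACT = 1.0, 10.0, 20.0, 0.02
cash, shares, price = 1.0, 0.0, 1.0

returns = []
for _ in range(50_000):
    prev = price
    price *= 1.0 + random.gauss(0.0, 0.01)            # noise traders
    wealth = cash + shares * price
    if wealth <= 0.01:                                # fund defaults, re-capitalised
        cash, shares, wealth = 1.0, 0.0, 1.0
    # Value investing: target position grows with underpricing,
    # capped by the bank's leverage limit (the margin constraint).
    target = BETA * max(V - price, 0.0) * wealth / price
    target = min(target, MAX_LEV * wealth / price)
    trade = target - shares
    price += IMPACT * trade * price                   # fund's own price impact
    price = max(price, 0.01)                          # keep the toy well-behaved
    cash -= trade * price
    shares = target
    returns.append(price / prev - 1.0)

big = sum(abs(r) > 0.05 for r in returns) / len(returns)
print(f"fraction of >5% moves: {big:.4f}")  # heavy tails vs. Gaussian
```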
It is shown that the laws of thermodynamics are extremely robust under generalizations of the form of entropy. Using the Bregman-type relative entropy, the Clausius inequality is proved to be always valid. This implies that thermodynamics is highly universal and does not rule out consistent generalization of the maximum entropy method.
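For reference, a sketch of the standard objects the abstract names, in common notation rather than the paper's: the Bregman-type relative entropy built from a strictly convex function, and the Clausius inequality it is used to establish.

```latex
% Standard definitions (assumed notation, not the paper's). For a
% strictly convex function \Phi, the Bregman-type relative entropy
% between distributions p and q is
\[
  D_\Phi(p \,\|\, q) \;=\; \Phi(p) - \Phi(q)
    - \sum_i \frac{\partial \Phi}{\partial q_i}\,(p_i - q_i) \;\ge\; 0 ,
\]
% which reduces to the Kullback--Leibler divergence
% \sum_i p_i \ln (p_i / q_i) for the choice \Phi(p) = \sum_i p_i \ln p_i.
% The Clausius inequality asserted to remain valid is
\[
  \oint \frac{\delta Q}{T} \;\le\; 0 .
\]
```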