
Taylor's law in innovation processes

Publication date: 2020
Field: Physics
Language: English





Taylor's law quantifies the scaling properties of the fluctuations of the number of innovations occurring in open systems. Urn-based modelling schemes have already proven effective in modelling this complex behaviour. Here, we present analytical estimates of Taylor's law exponents in such models by leveraging their representation in terms of triangular urn models. We also highlight the correspondence of these models with Poisson-Dirichlet processes and demonstrate how a non-trivial Taylor's law exponent is a kind of universal feature in systems related to human activities. We base this result on the analysis of four collections of data generated by human activity: (i) written language (from a Gutenberg corpus); (ii) an online music website (Last.fm); (iii) Twitter hashtags; (iv) an online collaborative tagging system (Del.icio.us). While the Taylor's law observed in the last two datasets agrees with the plain model predictions, we need to introduce a generalization to fully characterize the behaviour of the first two datasets, where temporal correlations are possibly more relevant. We suggest that Taylor's law is a fundamental complement to Zipf's and Heaps' laws in unveiling the complex dynamical processes underlying the evolution of systems featuring innovation.
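To make the mechanism concrete, here is a minimal simulation sketch of an urn model with triggering (in the spirit of the triangular-urn schemes discussed above); the parameters rho, nu, the run length and the number of realizations are illustrative assumptions, not values from the paper. It estimates a Taylor's-law exponent by regressing the log-variance of the number of distinct elements D(t) against its log-mean over independent realizations.

```python
# Sketch: urn model with triggering and an empirical Taylor's-law fit.
# All parameter values are illustrative; this is not the authors' code.
import random
import numpy as np

def simulate_D(T, rho=3, nu=2, seed=None):
    """Return D(t): the number of distinct colours drawn up to each step t."""
    rng = random.Random(seed)
    urn = [0]            # the urn starts with a single colour, labelled 0
    next_colour = 1
    seen = set()
    D = np.empty(T, dtype=int)
    for t in range(T):
        c = rng.choice(urn)
        urn.extend([c] * rho)                      # reinforcement: rho extra copies
        if c not in seen:                          # a novelty triggers nu + 1 new colours
            seen.add(c)
            urn.extend(range(next_colour, next_colour + nu + 1))
            next_colour += nu + 1
        D[t] = len(seen)
    return D

# Taylor's law: Var[D(t)] ~ a * E[D(t)]^beta across independent realisations
runs = np.array([simulate_D(2000, seed=s) for s in range(200)])
mean_D, var_D = runs.mean(axis=0), runs.var(axis=0)
mask = (mean_D > 0) & (var_D > 0)
beta, log_a = np.polyfit(np.log(mean_D[mask]), np.log(var_D[mask]), 1)
print(f"estimated Taylor's-law exponent beta ~ {beta:.2f}")
```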



Related research


The sparsity and compressibility of finite-dimensional signals are of great interest in fields such as compressed sensing. The notion of compressibility is also extended to infinite sequences of i.i.d. or ergodic random variables based on the observed error in their nonlinear k-term approximation. In this work, we use the entropy measure to study the compressibility of continuous-domain innovation processes (alternatively known as white noise). Specifically, we define such a measure as the entropy limit of the doubly quantized (time and amplitude) process. This provides a tool to compare the compressibility of various innovation processes. It also allows us to identify an analogue of the concept of entropy dimension, which was originally defined by Rényi for random variables. Particular attention is given to stable and impulsive Poisson innovation processes. Here, our results recognize Poisson innovations as the more compressible ones, with an entropy measure far below that of stable innovations. While this result departs from the previous knowledge regarding the compressibility of fat-tailed distributions, our entropy measure ranks stable innovations according to their tail decay.
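As a rough illustration of the idea of comparing innovation processes through the entropy of their doubly quantized samples, the sketch below contrasts an impulsive Poisson innovation with an alpha-stable one. The bin width dt, amplitude step q, impulse rate lam and stability index alpha are all assumptions, and the histogram entropy used here is only a stand-in for the limit defined in the paper.

```python
# Sketch: empirical entropy of time- and amplitude-quantized innovation samples.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
T, dt, q = 1000.0, 0.1, 0.5
n_bins = int(T / dt)

# Impulsive Poisson innovation: sparse Gaussian-amplitude impulses at Poisson times
lam = 0.2                                    # impulses per unit time (assumed)
n_imp = rng.poisson(lam * T)
poisson_samples = np.zeros(n_bins)
idx = rng.integers(0, n_bins, size=n_imp)
np.add.at(poisson_samples, idx, rng.normal(0.0, 1.0, size=n_imp))

# Alpha-stable innovation: i.i.d. stable increments, one per time bin
alpha = 1.5
stable_samples = levy_stable.rvs(alpha, 0.0, scale=dt ** (1.0 / alpha), size=n_bins)

def quantized_entropy(x, q):
    """Shannon entropy (bits) of amplitude-quantized samples."""
    _, counts = np.unique(np.round(x / q).astype(int), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

print("Poisson innovation entropy:", quantized_entropy(poisson_samples, q))
print("stable innovation entropy: ", quantized_entropy(stable_samples, q))
```

In this toy comparison the Poisson samples are mostly exact zeros, so their quantized entropy comes out much lower, consistent with the qualitative conclusion above.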
We propose a simple model where the innovation rate of a technological domain depends on the innovation rate of the technological domains it relies on. Using data on US patents from 1836 to 2017, we make out-of-sample predictions and find that the predictability of innovation rates can be boosted substantially when network effects are taken into account. In the case where the future innovation rates of a technology's neighborhood are known, the average predictability gain is 28% compared to simpler time series models which do not incorporate network effects. Even when nothing is known about the future, we find positive average predictability gains of 20%. The results have important policy implications, suggesting that the effective support of a given technology must take into account the technological ecosystem surrounding the targeted technology.
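The network-effect idea can be illustrated with a toy regression in which a domain's next-period innovation rate is predicted from its own lagged rate plus the lagged mean rate of its neighbours; the dependency network, the synthetic rates and the coupling coefficients below are entirely hypothetical.

```python
# Sketch: does a lagged neighbourhood feature help predict innovation rates?
# Synthetic data only; this is not the authors' patent dataset or pipeline.
import numpy as np

rng = np.random.default_rng(1)
n_domains, T = 20, 50
A = (rng.random((n_domains, n_domains)) < 0.2).astype(float)   # toy dependency network
np.fill_diagonal(A, 0)
deg = np.maximum(A.sum(axis=1), 1)

# Synthetic innovation rates with a mild network coupling (purely illustrative)
rates = np.zeros((T, n_domains))
rates[0] = rng.random(n_domains)
for t in range(1, T):
    neigh_mean = (A @ rates[t - 1]) / deg
    rates[t] = 0.6 * rates[t - 1] + 0.3 * neigh_mean + 0.1 * rng.random(n_domains)

# Features: own lagged rate and lagged mean neighbour rate; target: next rate
own = rates[:-1].ravel()
neigh = ((rates[:-1] @ A.T) / deg).ravel()
y = rates[1:].ravel()
X = np.column_stack([np.ones_like(own), own, neigh])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"in-sample R^2 with the neighbourhood feature: {r2:.3f}")
```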
In this paper, we propose a spatially constrained clustering problem belonging to the family of p-regions problems. Our formulation is motivated by the recent developments of economic complexity on the evolution of the economic output through key interactions among industries within economic regions. The objective of this model consists in aggregating a set of geographic areas into a prescribed number of regions (so-called innovation ecosystems) such that the resulting regions preserve the most relevant interactions among industries. We formulate the p-Innovation Ecosystems model as a mixed-integer programming (MIP) problem and propose a heuristic solution approach. We explore a case involving the municipalities of Colombia to illustrate how such a model can be applied and used for policy and regional development.
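A heavily simplified MIP in the spirit of this formulation is sketched below: areas are assigned to p regions so as to maximise the interaction weight preserved inside regions. The interaction matrix, the number of regions and the use of PuLP with the CBC solver are assumptions, and the spatial contiguity constraints that define genuine p-regions problems are deliberately omitted.

```python
# Sketch: assign areas to p regions, maximising within-region interaction weight.
# Contiguity constraints are NOT modelled here; data and solver are assumptions.
import numpy as np
import pulp

rng = np.random.default_rng(2)
n_areas, p = 8, 3
W = rng.random((n_areas, n_areas))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

prob = pulp.LpProblem("p_regions_sketch", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (range(n_areas), range(p)), cat="Binary")   # area i in region k
y = pulp.LpVariable.dicts("y", (range(n_areas), range(n_areas), range(p)), cat="Binary")

# Each area belongs to exactly one region; each region must be used
for i in range(n_areas):
    prob += pulp.lpSum(x[i][k] for k in range(p)) == 1
for k in range(p):
    prob += pulp.lpSum(x[i][k] for i in range(n_areas)) >= 1

# y[i][j][k] can be 1 only if both i and j sit in region k (linearised product)
for i in range(n_areas):
    for j in range(i + 1, n_areas):
        for k in range(p):
            prob += y[i][j][k] <= x[i][k]
            prob += y[i][j][k] <= x[j][k]

# Objective: total interaction weight kept inside regions
prob += pulp.lpSum(W[i, j] * y[i][j][k]
                   for i in range(n_areas) for j in range(i + 1, n_areas) for k in range(p))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
regions = {i: next(k for k in range(p) if x[i][k].value() > 0.5) for i in range(n_areas)}
print(regions)
```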
It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such Sample Space Reducing processes (SSRP) offer an alternative new mechanism to understand the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterized by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to $-1$ (Zipf's law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node-visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determines the scaling exponents. The result that Zipf's law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the amount of cycles in a network could be relevant for a series of applications in traffic, transport and supply-chain management.
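The noise-free, uniform-prior case is easy to reproduce numerically; the sketch below runs the simplest SSRP and fits the exponent of the state-visit counts, with N and the number of restarts chosen purely for illustration.

```python
# Sketch: the simplest sample-space-reducing process (SSRP) on states 1..N.
# The walker jumps uniformly to a strictly lower state and restarts at N from state 1.
import numpy as np

rng = np.random.default_rng(3)
N, n_restarts = 1_000, 100_000
visits = np.zeros(N + 1)

for _ in range(n_restarts):
    state = N
    while state > 1:
        visits[state] += 1
        state = rng.integers(1, state)   # uniform on {1, ..., state - 1}
    visits[1] += 1

# Zipf check: the visit frequency of state x should scale roughly as x^(-1)
x = np.arange(2, N + 1)
mask = visits[2:] > 0
slope, _ = np.polyfit(np.log(x[mask]), np.log(visits[2:][mask]), 1)
print(f"fitted visit-distribution exponent ~ {slope:.2f}  (Zipf's law predicts -1)")
```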
We generalize the classical Bass model of innovation diffusion to include a new class of agents, Luddites, that oppose the spread of innovation. Our model also incorporates ignorants, susceptibles, and adopters. When an ignorant and a susceptible meet, the former is converted to a susceptible at a given rate, while a susceptible spontaneously adopts the innovation at a constant rate. In response to the rate of adoption, an ignorant may become a Luddite and permanently reject the innovation. Instead of reaching complete adoption, the final state generally consists of a population of Luddites, ignorants, and adopters. The evolution of this system is investigated analytically and by stochastic simulations. We determine the stationary distribution of adopters, the time needed to reach the final state, and the influence of the network topology on the innovation spread. Our model exhibits an important dichotomy: when the rate of adoption is low, an innovation spreads slowly but widely; in contrast, when the adoption rate is high, the innovation spreads rapidly but the extent of the adoption is severely limited by Luddites.
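A well-mixed stochastic simulation gives a feel for the dichotomy described above; the rates below are illustrative and the Luddite-conversion rule (each adoption event turns one random ignorant into a Luddite with probability p_lud) is a simplified reading of the model, not its exact network version.

```python
# Sketch: Gillespie-style simulation of a Bass-type model with Luddites.
# States: ignorants I, susceptibles S, adopters A, Luddites L (L is absorbing).
import numpy as np

rng = np.random.default_rng(4)
N = 10_000
I, S, A, L = N - 10, 10, 0, 0
beta, gamma, p_lud = 1.0, 0.5, 0.3     # conversion rate, adoption rate, Luddite prob. (assumed)

t = 0.0
while S > 0:
    rate_convert = beta * I * S / N    # I + S -> 2S
    rate_adopt = gamma * S             # S -> A (spontaneous adoption)
    total = rate_convert + rate_adopt
    t += rng.exponential(1.0 / total)  # exponential waiting time to the next event
    if rng.random() < rate_convert / total:
        I -= 1; S += 1
    else:
        S -= 1; A += 1
        if I > 0 and rng.random() < p_lud:   # reaction to adoption: I -> L
            I -= 1; L += 1

print(f"final state at t = {t:.1f}:  ignorants = {I}, adopters = {A}, Luddites = {L}")
```

Raising gamma (faster spontaneous adoption) in this toy version ends the process sooner but converts far fewer ignorants, echoing the dichotomy described above.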