
Weibull-type limiting distribution for replicative systems

Posted by Junghyo Jo
Publication date: 2011
Research field: Physics
Paper language: English





The Weibull function is widely used to describe skew distributions observed in nature. However, the origin of this ubiquity is not always easy to explain. In the present paper, we consider the well-known Galton-Watson branching process describing simple replicative systems. The shape of the resulting distribution, about which little was previously known, is found to be essentially indistinguishable from the Weibull form over a wide range of the branching parameter; this can be seen from the exact series expansion for the cumulative distribution, which takes a universal form. We also find that the branching process can be mapped onto a process of aggregation of clusters. In the branching and aggregation process, the number of events considered for branching and aggregation grows cumulatively in time, whereas, for the binomial distribution, an independent event occurs at each time step with a given success probability.
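
As an illustration of the claim above, the following sketch simulates a simple Galton-Watson branching process and compares the empirical distribution of population sizes with a fitted Weibull distribution. The binary offspring rule (each individual doubles with probability p, otherwise remains unchanged) and all parameter values are assumptions made here for concreteness, not the paper's exact construction.

```python
# Minimal sketch: Galton-Watson branching process vs. a fitted Weibull.
# The offspring rule and parameters below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def population_size(p, generations):
    """Population size after `generations` steps, starting from one individual."""
    n = 1
    for _ in range(generations):
        # each of the n individuals branches (1 -> 2) with probability p
        n += rng.binomial(n, p)
    return n

p, generations, samples = 0.3, 20, 20_000
sizes = np.array([population_size(p, generations) for _ in range(samples)],
                 dtype=float)

# Fit a Weibull distribution with the location fixed just below the smallest size
shape, loc, scale = stats.weibull_min.fit(sizes, floc=sizes.min() - 1e-9)
ks = stats.kstest(sizes, 'weibull_min', args=(shape, loc, scale))

print(f"fitted Weibull shape = {shape:.3f}, scale = {scale:.3f}")
print(f"Kolmogorov-Smirnov distance to the Weibull fit = {ks.statistic:.4f}")
```

The Kolmogorov-Smirnov distance printed at the end is one simple way to quantify how close the simulated cumulative distribution is to the Weibull form.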




Read also

An efficient technique is introduced for model inference of complex nonlinear dynamical systems driven by noise. The technique does not require extensive global optimization, provides optimal compensation for noise-induced errors, and is robust over a broad range of parameters of dynamical models. It is applied to a clinically measured blood pressure signal for the simultaneous inference of the strength, directionality, and noise intensities in the nonlinear interaction between the cardiac and respiratory oscillations.
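
The abstract above does not spell out the algorithm, so the sketch below is only a generic illustration of dynamical inference from a noisy time series: a linear noise-driven system is simulated and its drift matrix is recovered by least squares on the Euler-Maruyama increments. It is not the technique of the paper, and the model and parameter values are assumptions.

```python
# Generic illustration of inferring drift parameters from a noise-driven
# trajectory (not the paper's method); all parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(1)

# assumed model: dx = (a*x + b*y) dt + sigma dW_x,  dy = (c*x + d*y) dt + sigma dW_y
a, b, c, d, sigma, dt, steps = -0.5, 1.0, -1.0, -0.5, 0.2, 0.01, 50_000
x = np.zeros(steps); y = np.zeros(steps)
x[0], y[0] = 1.0, 0.0
for t in range(steps - 1):
    x[t+1] = x[t] + (a*x[t] + b*y[t])*dt + sigma*np.sqrt(dt)*rng.standard_normal()
    y[t+1] = y[t] + (c*x[t] + d*y[t])*dt + sigma*np.sqrt(dt)*rng.standard_normal()

# least-squares estimate of the drift matrix from the increments
X = np.column_stack([x[:-1], y[:-1]])           # states
DX = np.column_stack([np.diff(x), np.diff(y)])  # increments
drift_hat = np.linalg.lstsq(X * dt, DX, rcond=None)[0].T

print("estimated drift matrix:\n", drift_hat)
print("true drift matrix:\n", np.array([[a, b], [c, d]]))
```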
We study the avalanche statistics observed in a minimal random growth model. The growth is governed by a reproduction rate obeying a probability distribution with finite mean $a$ and variance $v_a$. These two control parameters determine whether the avalanche size tends to a stationary distribution (finite-scale statistics with finite mean and variance, or power-law tailed statistics with exponent in $(1, 3]$), or instead to a non-stationary regime with log-normal statistics. Numerical results and their statistical analysis are presented for a uniformly distributed growth rate, and are corroborated and generalized by analytical results. The latter show that the numerically observed avalanche regimes exist for a wide family of growth-rate distributions and provide a precise definition of the boundaries between the three regimes.
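
The following sketch illustrates the kind of avalanche statistics described above under an assumed minimal update rule: each generation size is multiplied by a reproduction rate drawn uniformly with mean a, and the avalanche size is the accumulated total until extinction. The update rule, the stochastic rounding, and the parameter values are assumptions for illustration, not the paper's exact model.

```python
# Simplified sketch of avalanches driven by a random reproduction rate.
# The update rule below is an assumption, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(a, width=0.5, max_generations=10_000):
    """Total size of one avalanche started from a single active unit."""
    n, total = 1.0, 1.0
    for _ in range(max_generations):
        r = rng.uniform(a - width, a + width)   # reproduction rate with mean a
        n = np.floor(r * n + rng.random())      # stochastic rounding to an integer
        if n < 1:
            break
        total += n
    return total

for a in (0.8, 0.95):
    sizes = np.array([avalanche_size(a) for _ in range(5_000)])
    print(f"a = {a}: mean avalanche size = {sizes.mean():.1f}, "
          f"max = {sizes.max():.0f}")
```

For subcritical mean rates (a < 1) the avalanches terminate with a finite mean size, while the spread of sizes grows rapidly as a approaches 1.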
A theory of systems with long-range correlations based on the consideration of binary N-step Markov chains is developed. In the model, the conditional probability that the i-th symbol in the chain equals zero (or unity) is a linear function of the number of unities among the preceding N symbols. The correlation and distribution functions, as well as the variance of the number of symbols in words of arbitrary length L, are obtained analytically and numerically. A self-similarity of the studied stochastic process is revealed, and the similarity group transformation of the chain parameters is presented. The diffusion Fokker-Planck equation governing the distribution function of the L-words is explored. If the persistent correlations are not extremely strong, the distribution function is shown to be Gaussian with a variance that depends nonlinearly on L. The applicability of the developed theory to coarse-grained written and DNA texts is discussed.
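
A minimal simulation of such a binary N-step Markov chain is sketched below. The specific linear parametrization p(1) = 1/2 + mu*(2k/N - 1), with k the number of unities among the preceding N symbols, is an assumed concrete choice; for mu > 0 it produces persistent correlations, and the variance of the number of unities in words of length L then exceeds the uncorrelated (binomial) value.

```python
# Sketch of a binary N-step Markov chain with a linear conditional probability.
# The parametrization p = 1/2 + mu*(2k/N - 1) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)

def generate_chain(length, N, mu):
    chain = list(rng.integers(0, 2, size=N))   # random initial word
    for _ in range(length - N):
        k = sum(chain[-N:])                    # unities among the preceding N symbols
        p_one = 0.5 + mu * (2.0 * k / N - 1.0)
        chain.append(1 if rng.random() < p_one else 0)
    return np.array(chain)

N, mu, L_word = 20, 0.3, 100
chain = generate_chain(200_000, N, mu)

# variance of the number of unities in words of length L_word
words = chain[:len(chain) // L_word * L_word].reshape(-1, L_word)
var_correlated = words.sum(axis=1).var()
var_uncorrelated = L_word * 0.25               # i.i.d. fair-coin reference
print(f"variance of L-word sums: {var_correlated:.1f} "
      f"(uncorrelated reference: {var_uncorrelated:.1f})")
```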
Deep neural networks, when optimized with sufficient data, provide accurate representations of high-dimensional functions; in contrast, the function approximation techniques that have predominated in scientific computing do not scale well with dimensionality. As a result, many high-dimensional sampling and approximation problems once thought intractable are being revisited through the lens of machine learning. While the promise of unparalleled accuracy may suggest a renaissance for applications that require parameterizing representations of complex systems, in many applications gathering sufficient data to develop such a representation remains a significant challenge. Here we introduce an approach that combines rare-event sampling techniques with neural network optimization to optimize objective functions that are dominated by rare events. We show that importance sampling reduces the asymptotic variance of the solution to a learning problem, suggesting benefits for generalization. We study our algorithm in the context of learning dynamical transition pathways between two states of a system, a problem with applications in statistical physics and implications in machine learning theory. Our numerical experiments demonstrate that we can successfully learn even with the compounding difficulties of high dimension and rare data.
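
The variance-reduction principle invoked above can be illustrated with a toy example that involves no neural networks: estimating a rare-event probability for a standard normal variable by naive Monte Carlo versus importance sampling from a shifted proposal. The threshold and sample size are arbitrary choices, and this is not the paper's algorithm.

```python
# Toy illustration of importance sampling reducing estimator variance
# for a rare event (not the paper's neural-network algorithm).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a, n = 4.0, 100_000                       # rare threshold and sample size

# naive Monte Carlo estimate of P(X > a) for X ~ N(0, 1)
x = rng.standard_normal(n)
naive = (x > a).astype(float)

# importance sampling: draw from N(a, 1), reweight by the likelihood ratio
y = rng.normal(a, 1.0, n)
weights = np.exp(stats.norm.logpdf(y) - stats.norm.logpdf(y, loc=a))
importance = (y > a) * weights

exact = stats.norm.sf(a)
print(f"exact       : {exact:.3e}")
print(f"naive MC    : {naive.mean():.3e}  (estimator std {naive.std()/np.sqrt(n):.1e})")
print(f"importance  : {importance.mean():.3e}  (estimator std {importance.std()/np.sqrt(n):.1e})")
```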
Exponential Random Graph Models (ERGMs) have gained increasing popularity over the years. Rooted in statistical physics, the ERGM framework has been successfully employed for reconstructing networks, detecting statistically significant patterns in graphs, and counting networked configurations with given properties. From a technical point of view, the ERGM workflow is defined by two subsequent optimization steps: the first concerns the maximization of Shannon entropy and identifies the functional form of the ensemble probability distribution that is maximally non-committal with respect to the missing information; the second concerns the maximization of the likelihood function induced by this probability distribution and leads to its numerical determination. This second step translates into the resolution of a system of $O(N)$ non-linear, coupled equations (with $N$ being the total number of nodes of the network under analysis), a problem that is affected by three main issues, i.e. accuracy, speed and scalability. The present paper aims at addressing these problems by comparing the performance of three algorithms (Newton's method, a quasi-Newton method and a recently proposed fixed-point recipe) in solving several ERGMs, defined by binary and weighted constraints in both a directed and an undirected fashion. While Newton's method performs best for relatively small networks, the fixed-point recipe is to be preferred when large configurations are considered, as it ensures convergence to the solution within seconds for networks with hundreds of thousands of nodes (e.g. the Internet, Bitcoin). We attach to the paper a Python code implementing the three aforementioned algorithms on all the ERGMs considered in the present work.
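
To make the fixed-point idea concrete, the sketch below applies it to the simplest undirected binary ERGM, the configuration model with degree constraints, whose maximum-likelihood conditions $k_i = \sum_{j \neq i} x_i x_j / (1 + x_i x_j)$ can be rearranged into the map iterated below. This is a simplified illustration under these assumptions, not the Python code attached to the paper.

```python
# Fixed-point iteration for the undirected binary configuration model:
# k_i = sum_{j != i} x_i x_j / (1 + x_i x_j)  =>  x_i = k_i / sum_{j != i} x_j / (1 + x_i x_j)
import numpy as np

def solve_ubcm(degrees, n_iter=1000, tol=1e-10):
    k = np.asarray(degrees, dtype=float)
    x = k / np.sqrt(k.sum())          # simple initial guess
    for _ in range(n_iter):
        xx = np.outer(x, x)
        denom = x / (1.0 + xx)        # entry [i, j] equals x_j / (1 + x_i x_j)
        np.fill_diagonal(denom, 0.0)  # exclude self-loops
        x_new = k / denom.sum(axis=1)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# toy degree sequence of a small graph
degrees = [1, 2, 2, 3, 4, 4]
x = solve_ubcm(degrees)
p = np.outer(x, x) / (1.0 + np.outer(x, x))   # connection probabilities
np.fill_diagonal(p, 0.0)
print("expected degrees:", np.round(p.sum(axis=1), 3))  # should match the input
```

The same rearrangement-into-a-map strategy is what makes this kind of recipe scale to very large networks, since each sweep only requires vectorized operations on the current parameter vector.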