
Isotropy, entropy, and energy scaling

Posted by: Robert Shour
Publication date: 2012
Research field: Information engineering
Language: English
Author: Robert Shour





Two principles explain emergence. First, in the Receipt's reference frame, $Deg(S) = \frac{4}{3} Deg(R)$, where Supply $S$ is an isotropic radiative energy source, Receipt $R$ receives $S$'s energy, and $Deg$ is a system's degrees of freedom based on its mean path length. $S$'s 1/3 more degrees of freedom relative to $R$ enables $R$'s growth and increasing complexity. Second, $\rho(R) = Deg(R) \times \rho(r)$, where $\rho(R)$ represents the collective rate of $R$ and $\rho(r)$ represents the rate of an individual in $R$: as $Deg(R)$ increases due to the first principle, the multiplier effect of networking in $R$ increases. A universe like ours with isotropic energy distribution, in which both principles are operative, is therefore predisposed to exhibit emergence and, for reasons shown, a ubiquitous role for the natural logarithm.
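As a quick aid to the notation, the sketch below encodes the two principles directly. The function names and the sample values for $Deg(R)$ and $\rho(r)$ are illustrative assumptions, not taken from the paper.

```python
# A minimal numeric sketch of the abstract's two principles; function names
# and the sample values of Deg(R) and rho(r) are our own assumptions.

def deg_supply(deg_receipt: float) -> float:
    """First principle: Deg(S) = (4/3) * Deg(R)."""
    return (4.0 / 3.0) * deg_receipt

def collective_rate(deg_receipt: float, individual_rate: float) -> float:
    """Second principle: rho(R) = Deg(R) * rho(r)."""
    return deg_receipt * individual_rate

deg_R = 3.0   # assumed degrees of freedom of the Receipt system R
rho_r = 1.0   # assumed rate of a single individual in R

print(deg_supply(deg_R))               # 4.0: S has 1/3 more degrees of freedom
print(collective_rate(deg_R, rho_r))   # 3.0: networking multiplies the rate
```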




Read also

Transfer entropy has been used to quantify the directed flow of information between source and target variables in many complex systems. While transfer entropy was originally formulated in discrete time, in this paper we provide a framework for considering transfer entropy in continuous time systems, based on Radon-Nikodym derivatives between measures of complete path realizations. To describe the information dynamics of individual path realizations, we introduce the pathwise transfer entropy, the expectation of which is the transfer entropy accumulated over a finite time interval. We demonstrate that this formalism permits an instantaneous transfer entropy rate. These properties are analogous to the behavior of physical quantities defined along paths, such as work and heat. We use this approach to produce an explicit form for the transfer entropy for pure jump processes, and highlight the simplified form in the specific case of point processes (frequently used in neuroscience to model neural spike trains). Finally, we present two synthetic spiking neuron model examples to exhibit the pertinent features of our formalism, namely, that the information flow for point processes consists of discontinuous jump contributions (at spikes in the target) interrupting a continuously varying contribution (relating to waiting times between target spikes). Numerical schemes based on our formalism promise significant benefits over existing strategies based on discrete time formalisms.
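The paper works in continuous time; for orientation only, here is a plug-in estimator of the standard discrete-time transfer entropy (history length 1) that the continuous-time formalism generalizes. The variable names and the toy sequences are our own.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of discrete-time TE(X -> Y) with history length 1:
    sum over (y1, y0, x0) of p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
    triples, pairs_yx, pairs_yy, singles_y = Counter(), Counter(), Counter(), Counter()
    n = len(x) - 1
    for t in range(n):
        triples[(y[t + 1], y[t], x[t])] += 1
        pairs_yx[(y[t], x[t])] += 1
        pairs_yy[(y[t + 1], y[t])] += 1
        singles_y[y[t]] += 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_y1_given_y0x0 = c / pairs_yx[(y0, x0)]
        p_y1_given_y0 = pairs_yy[(y1, y0)] / singles_y[y0]
        te += (c / n) * log2(p_y1_given_y0x0 / p_y1_given_y0)
    return te

# Example: y copies x with one step of delay, so TE(x -> y) is large.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
print(transfer_entropy(x, y))
```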
We introduce an axiomatic approach to entropies and relative entropies that relies only on minimal information-theoretic axioms, namely monotonicity under mixing and data-processing as well as additivity for product distributions. We find that these axioms induce sufficient structure to establish continuity in the interior of the probability simplex and meaningful upper and lower bounds, e.g., we find that every relative entropy must lie between the Rényi divergences of order $0$ and $\infty$. We further show simple conditions for positive definiteness of such relative entropies and a characterisation in terms of a variant of relative trumping. Our main result is a one-to-one correspondence between entropies and relative entropies.
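To make the sandwich bound concrete: $D_0$ and $D_\infty$ have the standard closed forms below, and any relative entropy satisfying the axioms (the Kullback-Leibler divergence is one instance) must lie between them. The example distributions are made up.

```python
from math import log

def renyi_divergence_0(p, q):
    """D_0(P||Q) = -log Q(support of P); the lower end of the bound."""
    return -log(sum(qi for pi, qi in zip(p, q) if pi > 0))

def renyi_divergence_inf(p, q):
    """D_inf(P||Q) = log max_x P(x)/Q(x); the upper end of the bound."""
    return log(max(pi / qi for pi, qi in zip(p, q) if pi > 0))

def kl_divergence(p, q):
    """Kullback-Leibler divergence, one relative entropy the bounds sandwich."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.5, 0.3]
assert renyi_divergence_0(p, q) <= kl_divergence(p, q) <= renyi_divergence_inf(p, q)
print(renyi_divergence_0(p, q), kl_divergence(p, q), renyi_divergence_inf(p, q))
```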
We produce a series of results extending information-theoretical inequalities (discussed by Dembo, Cover, and Thomas in 1989–1991) to a weighted version of entropy. The resulting inequalities involve the Gaussian weighted entropy; they imply a number of new relations for determinants of positive-definite matrices.
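For readers unfamiliar with the notion: the weighted entropy of a density $f$ under a weight $\phi$ is $h^\phi(f) = -\int \phi(x)\, f(x) \log f(x)\, dx$, reducing to the differential entropy when $\phi \equiv 1$. A minimal numerical sketch; the integration grid and the particular Gaussian weight are our assumptions, not the paper's setup.

```python
import math

def weighted_entropy(phi, f, xs, dx):
    """Numerical h^phi(f) = -sum over grid of phi(x) * f(x) * log f(x) * dx."""
    return -sum(phi(x) * f(x) * math.log(f(x)) for x in xs) * dx

def gaussian(mu, sigma):
    c = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return lambda x: c * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

f = gaussian(0.0, 1.0)
dx = 0.001
xs = [-10 + i * dx for i in range(int(20 / dx))]

# With phi == 1 this reduces to the differential entropy of the standard
# normal, 0.5 * log(2*pi*e), about 1.4189 nats.
print(weighted_entropy(lambda x: 1.0, f, xs, dx))
# A Gaussian weight phi(x) = exp(-x^2 / 2) emphasizes values near the mean.
print(weighted_entropy(lambda x: math.exp(-0.5 * x * x), f, xs, dx))
```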
What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information, which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then, based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
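On a single realisation $(s, t)$, the pointwise mutual information splits as $i(s;t) = h(s) - h(s|t)$, and the two unsigned entropic parts are the specificity $h(s)$ and the ambiguity $h(s|t)$. A small sketch under that reading; the joint distribution is a made-up copy example, not one of the paper's canonical cases.

```python
from math import log2

def pointwise_decomposition(p_joint, s, t):
    """Split i(s;t) = h(s) - h(s|t) into its unsigned entropic parts:
    specificity h(s) = -log2 p(s) and ambiguity h(s|t) = -log2 p(s|t)."""
    p_s = sum(p for (si, ti), p in p_joint.items() if si == s)
    p_t = sum(p for (si, ti), p in p_joint.items() if ti == t)
    p_s_given_t = p_joint[(s, t)] / p_t
    specificity = -log2(p_s)        # always >= 0
    ambiguity = -log2(p_s_given_t)  # always >= 0
    return specificity, ambiguity, specificity - ambiguity

# Copy example: t is an exact copy of s, so the ambiguity vanishes and the
# full specificity of 1 bit is delivered as pointwise mutual information.
p_joint = {(0, 0): 0.5, (1, 1): 0.5}
print(pointwise_decomposition(p_joint, 0, 0))  # specificity 1 bit, ambiguity 0
```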
Robert Shour (2009)
If each node of an idealized network has an equal capacity to efficiently exchange benefits, then the network's capacity to use energy is scaled by the average amount of energy required to connect any two of its nodes. The scaling factor equals $e$, and the network's entropy is $\ln(n)$. Networking emerges in consequence of nodes minimizing the ratio of their energy use to the benefits obtained for such use, and their connectability. Networking leads to nested hierarchical clustering, which multiplies a network's capacity to use its energy to benefit its nodes. Network entropy multiplies a node's capacity. For a real network in which the nodes have the capacity to exchange benefits, network entropy may be estimated as $C \log_L(n)$, where the base of the log is the path length $L$ and $C$ is the clustering coefficient. Since $n$, $L$ and $C$ can be calculated for real networks, network entropy for real networks can be calculated and can reveal aspects of emergence and also of economic, biological, conceptual and other networks, such as the relationship between rates of lexical growth and divergence, and the economic benefit of adding customers to a commercial communications network. Entropy dating can help estimate the age of network processes, such as the growth of hierarchical society and of language.
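The estimate $C \log_L(n)$ transcribes directly into code via the change-of-base identity $\log_L(n) = \ln(n) / \ln(L)$. The sample values of $n$, $L$ and $C$ below are placeholders, not measurements from any real network.

```python
import math

def network_entropy(n, path_length, clustering):
    """Network entropy estimate C * log_L(n) from the abstract:
    n nodes, mean path length L as the log base, clustering coefficient C."""
    return clustering * math.log(n) / math.log(path_length)

# Ideal case from the abstract: entropy ln(n) when the effective base is e.
print(math.log(1000))                    # ~6.91 nats for n = 1000
# Hypothetical small-world-like figures (n, L, C are made-up placeholders).
print(network_entropy(1000, 3.5, 0.25))  # C * log_L(n)
```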