
How information is encoded in bio-molecular sequences is difficult to quantify since such an analysis usually requires sampling an exponentially large genetic space. Here we show how information theory reveals both robust and compressed encodings in the largest complete genotype-phenotype map (over 5 trillion sequences) obtained to date.
Christoph Adami, 2020
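A minimal sketch of the kind of per-site entropy calculation that underlies a genotype-phenotype analysis like the one above (the toy genotypes and function name are illustrative assumptions, not the paper's actual pipeline): across genotypes that share a phenotype, high-entropy sites tolerate many substitutions (a robust encoding), while low-entropy sites carry most of the information (a compressed encoding).

```python
import math
from collections import Counter

def per_site_entropy(sequences):
    """Shannon entropy (bits) at each position of equal-length sequences.

    High entropy at a site means many characters are tolerated there
    (robust site); low entropy means the site is constrained
    (information-rich site).
    """
    length = len(sequences[0])
    entropies = []
    for i in range(length):
        counts = Counter(seq[i] for seq in sequences)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

# Toy genotype set assumed to map to one phenotype: site 0 is fixed
# (informative), site 2 tolerates several characters (robust).
genotypes = ["ACG", "AAG", "ACA", "AAT"]
print(per_site_entropy(genotypes))  # -> [0.0, 1.0, 1.5]
```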
The origin of the uncertainty inherent in quantum measurements has been discussed since quantum theory's inception, but to date the source of the indeterminacy of measurements performed at an angle with respect to a quantum state's preparation is unknown. Here I propose that quantum uncertainty is a manifestation of the indeterminism inherent in mathematical logic. By explicitly constructing pairs of classical Turing machines that write into each other's program space, I show that the joint state of such a pair is determined, while the state of each individual machine is not, precisely as in quantum measurement. In particular, the eigenstates of the individual machines are undefined, but they appear to be superpositions of classical states, albeit with vanishing eigenvalue. Because these classically entangled Turing machines essentially implement undecidable halting problems, this construction suggests that the inevitable randomness that results when interrogating such machines about their state is precisely the randomness inherent in the bits of Chaitin's halting probability.
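A toy illustration of the coupling described above, under loose assumptions of my own (a four-symbol instruction alphabet and arbitrary write rules; this does not reproduce the paper's machines or their undecidability, it only shows the mutual-write structure): the joint state of the pair evolves deterministically, yet neither machine's trajectory is fixed by its own program alone, because every step rewrites the partner's program space.

```python
def step(prog_a, prog_b, pc_a, pc_b):
    """One synchronous step of the coupled pair: each machine reads the
    instruction at its program counter and writes a transformed copy of
    it into the *other* machine's program at that machine's counter."""
    ins_a, ins_b = prog_a[pc_a], prog_b[pc_b]
    prog_b[pc_b] = (ins_a + 1) % 4   # A writes into B's program space
    prog_a[pc_a] = (ins_b + 2) % 4   # B writes into A's program space
    return (pc_a + 1) % len(prog_a), (pc_b + 1) % len(prog_b)

# The joint trajectory below is fully determined by the initial joint
# state, but prog_a's future cannot be predicted from prog_a alone.
prog_a, prog_b = [0, 1, 2, 3], [3, 2, 1, 0]
pc_a = pc_b = 0
for _ in range(6):
    pc_a, pc_b = step(prog_a, prog_b, pc_a, pc_b)
    print(prog_a, prog_b)
```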
To infer information flow in any network of agents, it is important first and foremost to establish causal temporal relations between the nodes. Practical and automated methods that can infer causality are difficult to find, and are the subject of ongoing research. While Shannon information only detects correlation, there are several information-theoretic notions of directed information that have successfully detected causality in some systems, in particular in the neuroscience community. However, recent work has shown that some directed-information measures can inadequately estimate the extent of causal relations, or even fail to identify existing cause-effect relations between components of a system, especially if neurons contribute in a cryptographic manner (for example, via XOR-like logic whose output is uncorrelated with any single input) to influence the effector neuron. Here, we test how often cryptographic logic emerges in an evolutionary process that generates artificial neural circuits for two fundamental cognitive tasks: motion detection and sound localization. We also test whether activity time series recorded from behaving digital brains can be used to infer information flow via the transfer-entropy concept, compared against a ground-truth model of causal influence constructed from connectivity and circuit logic. Our results suggest that transfer entropy sometimes fails to infer causality when it exists, and sometimes suggests a causal connection where there is none. However, the extent of incorrect inference depends strongly on the cognitive task considered. These results emphasize the importance of understanding the fundamental logic processes that contribute to information flow in cognitive processing, and of quantifying their relevance in any given nervous system.
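The cryptographic failure mode is easy to reproduce with a plug-in transfer-entropy estimator of history length 1 (a minimal sketch; the estimator and the binary series below are my own assumptions, not the paper's setup). If Z_t = X_{t-1} XOR Y_{t-1} with X and Y independent and uniform, then X causally drives Z, yet TE(X→Z) vanishes because Z_t is statistically independent of X_{t-1} given Z's past; a plain relay Z_t = X_{t-1} yields the full bit.

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(src, dst):
    """Plug-in estimate of TE(src -> dst) = I(dst_t ; src_{t-1} | dst_{t-1})
    in bits, for two binary time series, with history length 1."""
    triples = Counter(zip(dst[1:], src[:-1], dst[:-1]))   # (dst_t, src_{t-1}, dst_{t-1})
    pairs_sd = Counter(zip(src[:-1], dst[:-1]))           # (src_{t-1}, dst_{t-1})
    pairs_dd = Counter(zip(dst[1:], dst[:-1]))            # (dst_t, dst_{t-1})
    singles = Counter(dst[:-1])
    n = len(dst) - 1
    te = 0.0
    for (zt, xp, zp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_sd[(xp, zp)]               # p(z_t | x_{t-1}, z_{t-1})
        p_cond_self = pairs_dd[(zt, zp)] / singles[zp]     # p(z_t | z_{t-1})
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

random.seed(1)
n = 100_000
x = [random.randint(0, 1) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]
z_xor  = [0] + [x[t - 1] ^ y[t - 1] for t in range(1, n)]  # "cryptographic" gate
z_copy = [0] + [x[t - 1] for t in range(1, n)]             # plain relay

print("TE X->Z (XOR): ", transfer_entropy(x, z_xor))   # ~0 bits despite causation
print("TE X->Z (copy):", transfer_entropy(x, z_copy))  # ~1 bit
```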
A central goal of evolutionary biology is to explain the origins and distribution of diversity across life. Beyond species or genetic diversity, we also observe diversity in the circuits (genetic or otherwise) underlying complex functional traits. However, while the theory behind the origins and maintenance of genetic and species diversity has been studied for decades, theory concerning the origin of diverse functional circuits is still in its infancy. It is not known how many different circuit structures can implement any given function, which evolutionary factors lead to different circuits, or whether the evolution of a particular circuit was due to adaptive or non-adaptive processes. Here, we use digital experimental evolution to study the diversity of neural circuits that encode motion detection in digital (artificial) brains. We find that evolution leads to an enormous diversity of potential neural architectures encoding motion-detection circuits, even for circuits encoding the exact same function. Evolved circuits vary in both redundancy and complexity (as previously found in genetic circuits), suggesting that similar evolutionary principles underlie circuit formation in any substrate. We also show that a simple (designed) motion-detection circuit that is optimally adapted gains in complexity when evolved further, and that selection for mutational robustness led to this gain in complexity.
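For concreteness, a minimal sketch of a designed delay-and-compare (Reichardt-style) motion detector of the general kind discussed above (the two-photoreceptor setup and function name are illustrative assumptions, not necessarily the paper's designed circuit): the unit fires only when activity at the left input precedes activity at the right input by one time step.

```python
def rightward_motion(left, right):
    """Delay-and-compare motion detector: fires at time t when the left
    photoreceptor was active at t-1 and the right one is active at t,
    i.e. the stimulus swept left -> right across the two inputs."""
    return [int(left[t - 1] and right[t]) for t in range(1, len(left))]

# A bright spot sweeping left -> right, then right -> left.
left_in  = [1, 0, 0, 0, 1, 0]
right_in = [0, 1, 0, 1, 0, 0]
print(rightward_motion(left_in, right_in))  # -> [1, 0, 0, 0, 0]: only the first sweep fires
```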
We present a model of decentralized growth for Artificial Neural Networks (ANNs) inspired by the development and physiology of real nervous systems. In this model, each individual artificial neuron is an autonomous unit whose behavior is determined only by the genetic information it harbors and by local concentrations of substrates modeled by a simple artificial chemistry. Gene expression is manifested as axon and dendrite growth, cell division and differentiation, substrate production, and cell stimulation. We demonstrate the model's power with a hand-written genome that leads to the growth of a simple network which performs classical conditioning. To evolve more complex structures, we implemented a platform-independent, asynchronous, distributed Genetic Algorithm (GA) that allows users to participate in evolutionary experiments via the World Wide Web.
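A highly simplified sketch of the core idea above, an autonomous neuron driven only by its genome and the local substrate (the two-threshold "genome", the 1-D substrate grid, and the growth and division rules below are all assumptions for illustration; the paper's artificial chemistry and gene-expression repertoire are far richer):

```python
import random

class Neuron:
    """Toy autonomous unit: behavior is a function only of its own
    'genome' (two thresholds here) and the substrate at its grid site."""
    def __init__(self, pos, genome):
        self.pos = pos
        self.grow_thresh, self.divide_thresh = genome
        self.dendrites = 0

    def step(self, substrate):
        c = substrate[self.pos]
        substrate[self.pos] = max(0.0, c - 0.1)  # consume local substrate
        if c > self.divide_thresh:               # cell division
            return Neuron((self.pos + 1) % len(substrate),
                          (self.grow_thresh, self.divide_thresh))
        if c > self.grow_thresh:                 # gene expression: grow a dendrite
            self.dendrites += 1
        return None

random.seed(0)
substrate = [random.random() for _ in range(10)]
neurons = [Neuron(0, genome=(0.3, 0.8))]
for _ in range(20):
    offspring = [n.step(substrate) for n in neurons]
    neurons += [n for n in offspring if n]
print(len(neurons), [n.dendrites for n in neurons])
```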
