
The information content of symbolic sequences (such as nucleic- or amino acid sequences, but also neuronal firings or strings of letters) can be calculated from an ensemble of such sequences, but because information cannot be assigned to single sequences, we cannot correlate information to other observables attached to the sequence. Here we show that an information score obtained from multivariate (multiple-variable) correlations within sequences of a training ensemble can be used to predict observables of out-of-sample sequences with an accuracy that scales with the complexity of correlations, showing that functional information emerges from a hierarchy of multi-variable correlations.
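The multivariate score in the abstract generalizes the standard single-site (per-column) information estimate for an ensemble of aligned sequences, I = Σ_i [log₂|A| − H_i], where H_i is the Shannon entropy of column i. A minimal sketch of that baseline estimator (the function name, toy alignment, and alphabet size are illustrative, not from the paper):

```python
from collections import Counter
from math import log2

def sequence_information(alignment, alphabet_size=4):
    """Per-site information (in bits) of an aligned sequence ensemble:
    I = sum_i [log2(alphabet_size) - H_i], where H_i is the Shannon
    entropy of column i estimated from sample frequencies."""
    n_seqs = len(alignment)
    length = len(alignment[0])
    total = 0.0
    for i in range(length):
        counts = Counter(seq[i] for seq in alignment)
        h_i = -sum((c / n_seqs) * log2(c / n_seqs) for c in counts.values())
        total += log2(alphabet_size) - h_i
    return total

# A fully conserved column carries log2(4) = 2 bits; a uniformly
# sampled column carries ~0 bits. Three conserved sites + one
# uniform site -> 6 bits total.
ensemble = ["ACGT", "ACGA", "ACGC", "ACGG"]
print(sequence_information(ensemble))  # 6.0
```

This per-site estimate ignores correlations between sites entirely, which is exactly the limitation the multivariate approach addresses.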
How information is encoded in bio-molecular sequences is difficult to quantify since such an analysis usually requires sampling an exponentially large genetic space. Here we show how information theory reveals both robust and compressed encodings in the largest complete genotype-phenotype map (over 5 trillion sequences) obtained to date.
69 - Christoph Adami 2020
The origin of the uncertainty inherent in quantum measurements has been discussed since quantum theory's inception, but to date the source of the indeterminacy of measurements performed at an angle with respect to a quantum state's preparation is unknown. Here I propose that quantum uncertainty is a manifestation of the indeterminism inherent in mathematical logic. By explicitly constructing pairs of classical Turing machines that write into each other's program space, I show that the joint state of such a pair is determined, while the state of the individual machine is not, precisely as in quantum measurement. In particular, the eigenstates of the individual machines are undefined, but they appear to be superpositions of classical states, albeit with vanishing eigenvalue. Because these classically entangled Turing machines essentially implement undecidable halting problems, this construction suggests that the inevitable randomness that results when interrogating such machines about their state is precisely the randomness inherent in the bits of Chaitin's halting probability.
81 - Christoph Adami 2019
The Leggett-Garg inequalities probe the classical-quantum boundary by putting limits on the sum of pairwise correlation functions between classical measurement devices that consecutively measured the same quantum system. The apparent violation of these inequalities by standard quantum measurements has cast doubt on quantum mechanics' ability to consistently describe classical objects. Recent work has concluded that these inequalities cannot be violated by either strong or weak projective measurements [1]. Here I consider an entropic version of the Leggett-Garg inequalities that is different from the standard inequalities yet similar in form, and can be defined without reference to any particular observable. I find that the entropic inequalities also cannot be violated by strong quantum measurements. The entropic inequalities can be extended to describe weak quantum measurements, and I show that these weak entropic Leggett-Garg inequalities cannot be violated either even though the quantum system remains unprojected, because the inequalities describe the classical measurement devices, not the quantum system. I conclude that quantum mechanics adequately describes classical devices, and that we should be careful not to assume that the classical devices accurately describe the quantum system.
119 - Christoph Adami 2019
Leggett and Garg derived inequalities that probe the boundaries of classical and quantum physics by putting limits on the properties that classical objects can have. Historically, it has been suggested that Leggett-Garg inequalities are easily violated by quantum systems undergoing sequences of strong measurements, casting doubt on whether quantum mechanics correctly describes macroscopic objects. Here I show that Leggett-Garg inequalities cannot be violated by any projective measurement. The perceived violation of the inequalities found previously can be traced back to an inappropriate assumption of non-invasive measurability. Surprisingly, weak projective measurements cannot violate the Leggett-Garg inequalities either because even though the quantum system itself is not fully projected via weak measurements, the measurement devices are.
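The "perceived violation" the abstract refers to comes from the textbook calculation for a precessing two-level system measured at three equally spaced times, where each two-time correlator is C_ij = cos((j−i)θ) and the Leggett-Garg parameter K = C₁₂ + C₂₃ − C₁₃ classically obeys K ≤ 1. A short sketch of that standard calculation (function name is illustrative; this reproduces the naive result, not the paper's corrected analysis):

```python
from math import cos, pi

def lg_parameter(theta):
    """Naive Leggett-Garg parameter K = C12 + C23 - C13 for a qubit
    precessing by angle theta between equally spaced measurements,
    using the textbook two-time correlators C_ij = cos((j - i) * theta).
    Classical (macrorealist) systems satisfy K <= 1."""
    c12 = cos(theta)
    c23 = cos(theta)
    c13 = cos(2 * theta)
    return c12 + c23 - c13

# The maximal apparent violation occurs at theta = pi/3:
print(lg_parameter(pi / 3))  # 1.5 > 1
```

The abstract's claim is that this apparent violation rests on an unwarranted non-invasiveness assumption, not that the arithmetic above is wrong.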
To infer information flow in any network of agents, it is important first and foremost to establish causal temporal relations between the nodes. Practical and automated methods that can infer causality are difficult to find, and the subject of ongoing research. While Shannon information only detects correlation, there are several information-theoretic notions of directed information that have successfully detected causality in some systems, in particular in the neuroscience community. However, recent work has shown that some directed information measures can sometimes inadequately estimate the extent of causal relations, or even fail to identify existing cause-effect relations between components of systems, especially if neurons contribute in a cryptographic manner to influence the effector neuron. Here, we test how often cryptographic logic emerges in an evolutionary process that generates artificial neural circuits for two fundamental cognitive tasks: motion detection and sound localization. We also test whether activity time-series recorded from behaving digital brains can infer information flow using the transfer entropy concept, when compared to a ground-truth model of causal influence constructed from connectivity and circuit logic. Our results suggest that transfer entropy will sometimes fail to infer causality when it exists, and sometimes suggest a causal connection when there is none. However, the extent of incorrect inference strongly depends on the cognitive task considered. These results emphasize the importance of understanding the fundamental logic processes that contribute to information flow in cognitive processing, and quantifying their relevance in any given nervous system.
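Transfer entropy, the directed measure the abstract tests, quantifies how much the past of X improves prediction of Y beyond Y's own past: TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log₂[p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t)]. A minimal plug-in estimator for binary time series with history length 1 (a sketch; the paper's actual estimator and circuits may differ):

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits for two equal-length
    binary time series, with history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))          # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))           # (y_{t+1}, y_t)
    singles_y = Counter(y[:-1])                      # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]         # p(y_{t+1} | y_t, x_t)
        p_cond_y = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y_{t+1} | y_t)
        te += p_joint * log2(p_cond_full / p_cond_y)
    return te

# X copied into Y with a one-step delay: Y's future is fully determined
# by X's past, so TE(X -> Y) approaches 1 bit while TE(Y -> X) is ~0.
random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y))  # close to 1.0
print(transfer_entropy(y, x))  # close to 0.0
```

The abstract's point is precisely that this kind of estimate can mislead: for circuits with "cryptographic" (XOR-like) logic, pairwise conditioning misses the joint causal contribution.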
A central goal of evolutionary biology is to explain the origins and distribution of diversity across life. Beyond species or genetic diversity, we also observe diversity in the circuits (genetic or otherwise) underlying complex functional traits. However, while the theory behind the origins and maintenance of genetic and species diversity has been studied for decades, theory concerning the origin of diverse functional circuits is still in its infancy. It is not known how many different circuit structures can implement any given function, which evolutionary factors lead to different circuits, and whether the evolution of a particular circuit was due to adaptive or non-adaptive processes. Here, we use digital experimental evolution to study the diversity of neural circuits that encode motion detection in digital (artificial) brains. We find that evolution leads to an enormous diversity of potential neural architectures encoding motion detection circuits, even for circuits encoding the exact same function. Evolved circuits vary in both redundancy and complexity (as previously found in genetic circuits) suggesting that similar evolutionary principles underlie circuit formation using any substrate. We also show that a simple (designed) motion detection circuit that is optimally-adapted gains in complexity when evolved further, and that selection for mutational robustness led to this gain in complexity.
62 - Christoph Adami 2017
The present document is an excerpt of an essay that I wrote as part of my application material to graduate school in Computer Science (with a focus on Artificial Intelligence), in 1986. I was not invited by any of the schools that received it, so I became a theoretical physicist instead. The essay's full title was "Some Topics in Philosophy and Computer Science". I am making this text (unchanged from 1985, preserving the typesetting as much as possible) available now in memory of Jerry Fodor, whose writings had influenced me significantly at the time (even though I did not always agree).
How cooperation can evolve between players is an unsolved problem of biology. Here we use Hamiltonian dynamics of models of the Ising type to describe populations of cooperating and defecting players to show that the equilibrium fraction of cooperators is given by the expectation value of a thermal observable akin to a magnetization. We apply the formalism to the Public Goods game with three players, and show that a phase transition between cooperation and defection occurs that is equivalent to a transition in one-dimensional Ising crystals with long-range interactions. We then investigate the effect of punishment on cooperation and find that punishment plays the role of a magnetic field that leads to an alignment between players, thus encouraging cooperation. We suggest that a thermal Hamiltonian picture of the evolution of cooperation can generate other insights about the dynamics of evolving groups by mining the rich literature of critical dynamics in low-dimensional spin systems.
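The "thermal observable akin to a magnetization" idea can be illustrated with a toy Boltzmann average over the 2³ states of a three-player Public Goods game. This sketch is not the paper's actual Hamiltonian; the payoff rule, the parameters r (synergy factor), cost, and beta (inverse temperature) are all illustrative assumptions:

```python
from itertools import product
from math import exp

def mean_cooperation(r, cost, beta):
    """Toy thermal average for a 3-player Public Goods game: each state
    s in {0,1}^3 (1 = cooperate) is Boltzmann-weighted by the total
    group payoff (playing the role of minus the energy), and the
    expected cooperator fraction <n_c / 3> is the magnetization analogue."""
    z = 0.0   # partition function
    m = 0.0   # weighted cooperator fraction
    for s in product((0, 1), repeat=3):
        n_c = sum(s)
        # Pot r * cost * n_c shared equally among all 3 players,
        # minus each cooperator's contribution: net total payoff.
        payoff = 3 * (r * cost * n_c / 3) - cost * n_c
        w = exp(beta * payoff)
        z += w
        m += w * (n_c / 3)
    return m / z

# With synergy r > 1, cooperation is favored at low "temperature"
# (large beta); with r < 1, defection dominates.
print(mean_cooperation(r=1.5, cost=1.0, beta=2.0))  # > 0.5
print(mean_cooperation(r=0.5, cost=1.0, beta=2.0))  # < 0.5
```

In this toy model the payoff is linear in n_c, so the average reduces to a non-interacting spin system in a field; the phase transition in the abstract requires the genuinely interacting, long-range Ising structure of the paper's formalism.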
Flies that walk in a covered planar arena on straight paths avoid colliding with each other, but which of the two flies stops is not random. High-throughput video observations, coupled with dedicated experiments with controlled robot flies, have revealed that flies utilize the type of optic flow on their retina as a determinant of who should stop, a strategy also used by ship captains to determine which of two ships on a collision course should throw engines in reverse. We use digital evolution to test whether this strategy evolves when collision avoidance is the sole penalty. We find that the strategy does indeed evolve in a narrow range of cost/benefit ratios, for experiments in which the regressive motion cue is error free. We speculate that these stringent conditions may not be sufficient to evolve the strategy in real flies, pointing perhaps to auxiliary costs and benefits not modeled in our study.