In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated by the aperture problem in motion detection: since the luminance of an elongated line is symmetrical along its axis, its tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. In simulations, we observe a progressive solution to the aperture problem similar to that found in physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independently of their texture. Second, we observe that incoherent features are explained away, while coherent information diffuses progressively to the global scale. Most previous models included ad hoc mechanisms, such as end-stopped cells or a selection layer, to track specific luminance-based features as necessary conditions for solving the aperture problem. Here, we have shown that motion-based predictive coding, as implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights into the role of prediction underlying a large class of sensory computations.
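The core idea, context-dependent diffusion of a probabilistic motion representation, can be illustrated with a toy sketch (our own minimal illustration, not the paper's model): a 1-D row of receptive fields observes a moving line, line endings measure the true velocity unambiguously, interior cells face the aperture problem and have a flat likelihood, and repeated diffusion of neighboring posteriors lets the unambiguous evidence propagate inward.

```python
import numpy as np

# Toy sketch (hypothetical parameters, not the paper's implementation):
# a 1-D row of cells observing a moving line of 9 receptive fields.
velocities = np.linspace(-1.0, 1.0, 21)          # candidate velocities
n_cells, true_v = 9, 0.5

def likelihood(cell):
    if cell in (0, n_cells - 1):                  # line endings: unambiguous
        return np.exp(-((velocities - true_v) ** 2) / 0.02)
    return np.ones_like(velocities)               # aperture: flat likelihood

belief = np.ones((n_cells, len(velocities)))
belief /= belief.sum(axis=1, keepdims=True)

for _ in range(20):                               # predictive-coding loop
    # context-dependent diffusion: each interior cell's prior is the
    # average of its neighbors' posteriors from the previous step
    prior = np.copy(belief)
    prior[1:-1] = 0.5 * (belief[:-2] + belief[2:])
    belief = prior * np.array([likelihood(c) for c in range(n_cells)])
    belief /= belief.sum(axis=1, keepdims=True)

# after diffusion, even the central (ambiguous) cell peaks at the true velocity
central_estimate = velocities[np.argmax(belief[n_cells // 2])]
print(round(central_estimate, 2))                 # → 0.5
```

The endpoint evidence diffuses one cell per iteration, so the ambiguous center resolves progressively rather than instantly, qualitatively matching the gradual solution to the aperture problem described above.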
We present a cerebellar architecture with two main characteristics. The first is that complex spikes respond to increases in sensory errors. The second is that cerebellar modules associate particular contexts in which errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error problem. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory, we show how to extend our model so that it implements anticipatory corrections in cascade control systems that span from muscle contractions to cognitive operations.
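The two characteristics can be sketched in a toy 1-D simulation (our hypothetical values, not the paper's 3-D reaching model): a muscle is driven toward a desired equilibrium length while a constant perturbation pulls it away; a complex-spike signal fires whenever the error increases, and the module incrementally associates the current context with a corrective command until the increase in error stops.

```python
# Toy sketch (hypothetical context label and gains, not the paper's model).
desired, actual = 1.0, 1.0          # desired and current muscle length
perturbation = -0.05                # constant pull away from equilibrium
corrections = {}                    # context -> learned corrective command
context = "pull"                    # hypothetical context label

prev_error = abs(desired - actual)
for _ in range(100):
    drive = 0.2 * (desired - actual)        # Equilibrium-Point spring drive
    actual += drive + perturbation + corrections.get(context, 0.0)
    error = abs(desired - actual)
    if error > prev_error:                  # complex spike: error increasing
        # associate this context with a slightly stronger correction
        corrections[context] = corrections.get(context, 0.0) + 0.01
    prev_error = error

print(round(actual, 2), round(corrections[context], 2))
```

Note that learning halts as soon as the error stops growing, matching the abstract's formulation that the corrective command stops the increase in error rather than driving it to zero.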
Many studies have found evidence that the brain operates at a critical point, a process known as self-organized criticality. A recent paper reported remarkable scalings suggestive of criticality in systems as different as neural cultures and anesthetized or awake brains. We point out here that the diversity of these states calls into question any claimed role of criticality in information processing. Furthermore, we show that two non-critical systems pass all the tests for criticality, a control that was not provided in the original article. We conclude that such false positives demonstrate that the presence of criticality in the brain is still not proven and that we need better methods than scaling analyses.
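For context, the scaling analyses in question typically fit a power-law exponent to avalanche-size distributions. A minimal sketch of such a fit, using the standard continuous maximum-likelihood estimator (this is a generic illustration, not the paper's analysis):

```python
import numpy as np

# Illustrative sketch: generate power-law-distributed "avalanche sizes" and
# recover the exponent with the Clauset-Shalizi-Newman continuous MLE.
rng = np.random.default_rng(0)

def sample_power_law(alpha, xmin, n):
    # inverse-transform sampling of p(x) ~ x^(-alpha) on [xmin, inf)
    u = rng.random(n)
    return xmin * (1 - u) ** (-1 / (alpha - 1))

def mle_exponent(x, xmin):
    # alpha_hat = 1 + n / sum(ln(x / xmin))
    x = x[x >= xmin]
    return 1 + len(x) / np.sum(np.log(x / xmin))

sizes = sample_power_law(alpha=1.5, xmin=1.0, n=100_000)
print(round(mle_exponent(sizes, xmin=1.0), 2))   # ≈ 1.5
```

As the abstract argues, obtaining a good power-law fit of this kind is not by itself evidence of criticality, since non-critical generative processes can pass the same test.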
The hard problem of consciousness is the question of how subjective experience arises from brain matter. I suggest exploring the possibility that quantum physics could be part of the answer. The simultaneous unity and complexity of subjective experience is difficult to understand from a classical physics perspective. In contrast, quantum entanglement is naturally both complex and unified. Moreover, the concept of matter is much more subtle in quantum physics than in classical physics, and quantum computing shows that quantum effects can be useful for information processing. Building on recent progress in quantum technology and neuroscience, I propose a concrete hypothesis as a basis for further investigation, namely that subjective experience is related to the dynamics of a complex entangled state of spins, which is continuously generated and updated through the exchange of photons. Spins in condensed matter systems at room or body temperature can have coherence times in the relevant range for subjective experience (milliseconds to seconds). Photons are well suited for distributing entanglement over macroscopic distances. Neurons emit photons, with reactive oxygen species in the mitochondria being likely sources. Opsins, light-sensitive proteins that are plausible single-photon detectors, exist in the brain and are evolutionarily conserved, suggesting that they serve a function. We have recently shown by detailed numerical modeling that axons can plausibly act as photonic waveguides. The oxygen molecule, which has non-zero electronic spin and emits photons, might serve as an interface between photons and spins. The achievable photon rates seem to be more than sufficient to support the bandwidth of subjective experience. The proposed hypothesis raises many interesting experimental and theoretical questions in neuroscience, quantum physics, evolutionary biology, psychophysics, and philosophy.
Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system along a particular functional dimension, such as the eye movement following the motion of a visual scene. Here, we describe a framework to generate random texture movies with controlled information content, i.e., Motion Clouds. These stimuli are defined using a generative model that is based on controlled experimental parametrization. We show that Motion Clouds correspond to a dense mixture of localized moving gratings with random positions. Their global envelope is similar to natural-like stimulation, with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically and propose an open-source Python-based implementation. Examples of the use of this framework are shown. We also propose extensions to other modalities such as color vision, touch, and audition.
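The generative model can be sketched in a few lines of NumPy (parameter names and values are ours, not those of the reference implementation): draw random phases in Fourier space, shape them with a Gaussian envelope centered on a preferred spatial frequency and on the plane of a target velocity, then inverse-transform to obtain the movie.

```python
import numpy as np

# Minimal sketch of the Motion Clouds idea (hypothetical parameters).
N, T = 32, 16                              # pixels per side, frames
fx, fy, ft = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N),
                         np.fft.fftfreq(T), indexing="ij")

V = 0.5                                    # horizontal speed (pixels/frame)
f0, bandwidth = 0.125, 0.05                # preferred spatial frequency, width
radius = np.sqrt(fx**2 + fy**2)
# envelope: ring at radius f0, concentrated on the speed plane ft = -V * fx
envelope = (np.exp(-((radius - f0) ** 2) / (2 * bandwidth**2))
            * np.exp(-((ft + V * fx) ** 2) / (2 * bandwidth**2)))

rng = np.random.default_rng(0)
phase = np.exp(2j * np.pi * rng.random((N, N, T)))
movie = np.fft.ifftn(envelope * phase).real
movie /= np.abs(movie).max()               # normalize contrast to [-1, 1]
print(movie.shape)                         # → (32, 32, 16)
```

Because only the phases are random while the Fourier envelope is fixed, every draw is a statistically equivalent texture translating at the chosen retinal-slip velocity.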
The multiple traveling salesman problem (mTSP) is a well-known NP-hard problem with numerous real-world applications. In particular, this work addresses the MinMax mTSP, where the objective is to minimize the maximum tour length (sum of Euclidean distances) among all agents. The mTSP is normally considered a combinatorial optimization problem, but due to its computational complexity, search-based exact and heuristic algorithms become inefficient as the number of cities increases. Encouraged by the recent developments in deep reinforcement learning (dRL), this work considers the mTSP as a cooperative task and introduces a decentralized attention-based neural network method to solve the MinMax mTSP, named DAN. In DAN, agents learn fully decentralized policies to collaboratively construct a tour, by predicting the future decisions of other agents. Our model relies on the Transformer architecture and is trained using multi-agent RL with parameter sharing, which provides natural scalability with respect to the number of agents and cities. We experimentally demonstrate our model on small- to large-scale mTSP instances, which involve 50 to 1000 cities and 5 to 20 agents, and compare against state-of-the-art baselines. For small-scale problems (fewer than 100 cities), DAN is able to closely match the performance of the best solver available (OR-Tools, a meta-heuristic solver) given the same computation time budget. On larger-scale instances, DAN outperforms both conventional and dRL-based solvers, while keeping computation times low, and exhibits enhanced collaboration among agents.
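The MinMax objective and the decentralized tour-construction setting can be made concrete with a toy sketch (this simple nearest-neighbor greedy stands in for the learned DAN policy; instance sizes and names are ours): each agent extends its own tour in turn from a shared depot, and the cost is the length of the longest tour.

```python
import math
import random

# Toy sketch of the MinMax mTSP objective (not the DAN network itself).
random.seed(1)
depot = (0.0, 0.0)
cities = [(random.random(), random.random()) for _ in range(20)]
n_agents = 3

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

tours = [[depot] for _ in range(n_agents)]
unvisited = set(range(len(cities)))
while unvisited:
    for tour in tours:                     # agents decide in turn
        if not unvisited:
            break
        nxt = min(unvisited, key=lambda i: dist(tour[-1], cities[i]))
        tour.append(cities[nxt])
        unvisited.remove(nxt)
for tour in tours:
    tour.append(depot)                     # every agent returns to the depot

def tour_length(tour):
    return sum(dist(a, b) for a, b in zip(tour, tour[1:]))

# MinMax cost: the length of the LONGEST agent tour
minmax_cost = max(tour_length(t) for t in tours)
print(round(minmax_cost, 3))
```

DAN replaces the greedy choice above with an attention-based policy that also anticipates the other agents' future decisions, which is what drives down the maximum tour length rather than the sum.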