
An approach to synaptic learning for autonomous motor control

Publication date: 2020
Fields: Biology, Physics
Language: English





In the realm of motor control, artificial agents cannot match the performance of their biological counterparts. We thus explore a neural control architecture that is both biologically plausible and capable of fully autonomous learning. The architecture consists of feedback controllers that learn to achieve a desired state by selecting the errors that should drive them. This selection happens through a family of differential Hebbian learning rules that, through interaction with the environment, can learn to control systems where the error responds monotonically to the control signal. We next show that in a more general case, neural reinforcement learning can be coupled with a feedback controller to reduce errors that arise non-monotonically from the control signal. The use of feedback control reduces the complexity of the reinforcement learning problem, because only a desired value must be learned, with the controller handling the details of how it is reached. This makes the function to be learned simpler, potentially allowing more complex actions to be learned. We discuss how this approach could be extended to hierarchical architectures.
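The abstract's central simplification can be illustrated with a toy closed loop. The sketch below is not the paper's actual algorithm: it assumes a first-order plant, a proportional controller with gain `k`, and crude stochastic hill-climbing standing in for the reinforcement learner. The point it demonstrates is that the learner searches only a single scalar (the setpoint), while the fixed feedback controller handles how that value is reached.

```python
import numpy as np

# Illustrative sketch (not the paper's method): a fixed feedback
# controller tracks whatever setpoint it is given, so the learner
# only has to discover one scalar instead of a full control policy.

rng = np.random.default_rng(1)
dt, k = 0.01, 5.0          # time step and controller gain (assumed)
goal = 0.7                 # hidden target the learner must discover

def rollout(setpoint, steps=500):
    """Run the closed loop: a proportional controller tracks `setpoint`."""
    x = 0.0
    for _ in range(steps):
        u = k * (setpoint - x)     # controller handles *how* to get there
        x += dt * u                # simple first-order plant
    return x

def reward(setpoint):
    return -(rollout(setpoint) - goal) ** 2

# Stochastic hill-climbing stands in for reinforcement learning:
# only the one-dimensional setpoint is searched.
s = 0.0
for _ in range(200):
    cand = s + rng.normal(0, 0.1)
    if reward(cand) > reward(s):
        s = cand

print(abs(s - goal) < 0.1)   # the learned setpoint approaches the goal
```

Because the controller guarantees the plant settles at the setpoint, the reward is a simple function of one variable; without feedback control, the learner would instead have to shape the entire control trajectory.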



Related research

The broad concept of emergence is instrumental in many of the most challenging open scientific questions -- yet few quantitative theories of what constitutes an emergent phenomenon have been proposed. This article introduces a formal theory of causal emergence in multivariate systems, which studies the relationship between the dynamics of parts of a system and macroscopic features of interest. Our theory provides a quantitative definition of downward causation, and introduces a complementary modality of emergent behaviour, which we refer to as causal decoupling. Moreover, the theory provides practical criteria that can be efficiently calculated in large systems, making our framework applicable in a range of scenarios of practical interest. We illustrate our findings in a number of case studies, including Conway's Game of Life, Reynolds' flocking model, and neural activity as measured by electrocorticography.
A fundamental problem in neuroscience is to understand how sequences of action potentials (spikes) encode information about sensory signals and motor outputs. Although traditional theories of neural coding assume that information is conveyed by the total number of spikes fired (spike rate), recent studies of sensory and motor activity have shown that far more information is carried by the millisecond-scale timing patterns of action potentials (spike timing). However, it is unknown whether or how subtle differences in spike timing drive differences in perception or behavior, leaving it unclear whether the information carried by spike timing actually plays a causal role in brain function. Here we demonstrate how a precise spike timing code is read out downstream by the muscles to control behavior. We provide both correlative and causal evidence to show that the nervous system uses millisecond-scale variations in the timing of spikes within multi-spike patterns to regulate a relatively simple behavior - respiration in the Bengalese finch, a songbird. These findings suggest that a fundamental assumption of current theories of motor coding requires revision, and that significant improvements in applications, such as neural prosthetic devices, can be achieved by using precise spike timing information.
Latency reduction of postsynaptic spikes is a well-known effect of Spike-Timing-Dependent Plasticity (STDP). We extend this notion to long postsynaptic spike trains, showing that, for a fixed input spike train, STDP reduces the number of postsynaptic spikes and concentrates the remaining ones. We then study the consequences of this phenomenon for coding, finding that this mechanism improves the neural code by increasing the signal-to-noise ratio and lowering the metabolic cost of frequent stimuli. Finally, we illustrate how the reduction of postsynaptic latencies can lead to the emergence of predictions.
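The latency-reduction effect rests on the standard pairwise STDP window. The sketch below uses illustrative amplitudes and time constants (the exponential window shape is standard, but these particular values are assumptions, not taken from the paper): inputs firing shortly before the postsynaptic spike are potentiated most strongly, which pulls the postsynaptic spike toward stimulus onset on repeated presentations.

```python
import numpy as np

# Minimal sketch of the pairwise STDP window (illustrative parameters).
A_plus, A_minus = 0.05, 0.06   # assumed amplitudes (depression dominant)
tau = 20.0                     # assumed time constant in ms

def stdp(dt_ms):
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:                          # pre before post: potentiate
        return A_plus * np.exp(-dt_ms / tau)
    return -A_minus * np.exp(dt_ms / tau)  # post before pre: depress

# Inputs arriving just before the postsynaptic spike gain the most
# weight; inputs arriving after it are weakened.
print(stdp(5.0) > stdp(30.0) > 0 > stdp(-5.0))  # True
```

Repeated application of this rule strengthens the earliest effective inputs and depresses the later ones, which is the mechanism behind the reduced, earlier-concentrated postsynaptic spiking described above.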
Brain plasticity refers to the brain's ability to change neuronal connections as a result of environmental stimuli, new experiences, or damage. In this work, we study the effects of synaptic delay on both the coupling strengths and the synchronisation of a neuronal network with synaptic plasticity. We build a network of Hodgkin-Huxley neurons in which plasticity follows Hebbian rules. We verify that, without time delay, the excitatory synapses from high-frequency to low-frequency neurons become stronger and the inhibitory synapses strengthen in the opposite direction, whereas when the delay is increased the network develops a non-trivial topology. Regarding synchronisation, the phenomenon is observed only for small values of the synaptic delay.
Synaptic plasticity is the capacity of a preexisting connection between two neurons to change in strength as a function of neural activity. Because synaptic plasticity is the major candidate mechanism for learning and memory, elucidating its constituent mechanisms is of crucial importance for many aspects of normal and pathological brain function. In particular, a prominent aspect that remains debated is how the plasticity mechanisms, which span a broad spectrum of temporal and spatial scales, come to act together in a concerted fashion. Here we review and discuss evidence pointing to a possible non-neuronal, glial candidate for such orchestration: the regulation of synaptic plasticity by astrocytes.