
Functional importance of noise in neuronal information processing

Added by: Matjaz Perc
Publication date: 2018
Fields: Biology, Physics
Language: English

Noise is an inherent part of neuronal dynamics, and thus of the brain. It can be observed in neuronal activity at different spatiotemporal scales, including in neuronal membrane potentials, local field potentials, electroencephalography, and magnetoencephalography. A central research topic in contemporary neuroscience is to elucidate the functional role of noise in neuronal information processing. Experimental studies have shown that a suitable level of noise may enhance the detection of weak neuronal signals by means of stochastic resonance. In response, theoretical research, based on the theory of stochastic processes, nonlinear dynamics, and statistical physics, has made great strides in elucidating the mechanism and the many benefits of stochastic resonance in neuronal systems. In this perspective, we review recent research dedicated to neuronal stochastic resonance in biophysical mathematical models. We also explore the regulation of neuronal stochastic resonance, and we outline important open questions and directions for future research. A deeper understanding of neuronal stochastic resonance may afford us new insights into the highly impressive information processing in the brain.
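
As a concrete illustration of the effect reviewed above, the following sketch drives a leaky integrate-and-fire neuron with a subthreshold sinusoid plus Gaussian white noise and measures how strongly the resulting spikes lock to the signal. This is a minimal stand-in rather than any specific model from the reviewed literature; all parameter values (threshold, membrane time constant, signal amplitude and frequency, noise levels) are illustrative assumptions.

```python
import numpy as np

def lif_phase_locking(noise_sigma, T=60.0, dt=1e-3, tau=0.02,
                      v_th=1.0, v_reset=0.0, amp=0.8, freq=2.0, seed=0):
    """Vector strength of LIF spikes relative to a subthreshold sinusoidal drive."""
    rng = np.random.default_rng(seed)
    v, spike_times = 0.0, []
    for i in range(int(T / dt)):
        t = i * dt
        drive = amp * np.sin(2 * np.pi * freq * t)   # subthreshold: amp < v_th
        v += dt * (drive - v) / tau + noise_sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:                                # threshold crossing = spike
            spike_times.append(t)
            v = v_reset
    if len(spike_times) < 5:                         # too few spikes to assess locking
        return 0.0
    phases = 2 * np.pi * freq * np.array(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))      # 1 = perfect phase locking

# Locking is absent for weak noise (no spikes at all), peaks at an intermediate
# noise level (the stochastic-resonance regime), and degrades when noise dominates.
for sigma in (0.02, 0.3, 1.0, 3.0, 10.0):
    print(f"sigma = {sigma:5.2f}   vector strength = {lif_phase_locking(sigma):.2f}")
```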

Related research

Rhythmic activity has been associated with a wide range of cognitive processes. Previous studies have shown that spike-timing-dependent plasticity (STDP) can facilitate the transfer of rhythmic activity down the information-processing pathway. However, STDP is also known to generate strong winner-take-all-like competition between subgroups of correlated synaptic inputs. Consequently, one might expect STDP to induce strong competition between different rhythmicity channels, thus preventing the multiplexing of information across different frequency channels. This study explored whether STDP facilitates the multiplexing of information across multiple frequency channels, and if so, under what conditions. We investigated the STDP dynamics in the framework of a model consisting of two competing subpopulations of neurons that synapse in a feedforward manner onto a single postsynaptic neuron. Each subpopulation was assumed to oscillate independently, in its own frequency band. To investigate the STDP dynamics, a mean-field Fokker-Planck theory was developed in the limit of a slow learning rate. Surprisingly, our theory predicted only limited interactions between the different subgroups. Our analysis further revealed that the interaction between these channels was mediated mainly by the shared component of the mean activity. Next, we generalized these results beyond the simplistic model using numerical simulations. We found that, for a wide range of parameters, the system converged to a solution in which the postsynaptic neuron responded to both rhythms. Nevertheless, all the synaptic weights remained dynamic and did not converge to a fixed point. These findings imply that STDP can support the multiplexing of rhythmic information, and they demonstrate how functionality can be retained in the face of continuous remodeling of all the synaptic weights.
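
At the heart of such models is the pair-based STDP kernel: potentiation when a presynaptic spike precedes a postsynaptic one, depression for the reverse order. The sketch below implements this kernel with all-to-all pair summation over two spike trains; the amplitudes and time constants are generic illustrative assumptions, not the parameters of the study.

```python
import numpy as np

A_PLUS, A_MINUS = 0.005, 0.00525    # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 0.020, 0.020  # kernel time constants (s)

def stdp_kernel(dt):
    """Weight change for one spike pair, dt = t_post - t_pre (seconds)."""
    if dt > 0:                                   # pre before post: potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)     # post before pre: depress

def total_dw(pre_spikes, post_spikes):
    """All-to-all pair summation of the kernel over two spike-time arrays."""
    return sum(stdp_kernel(tp - tq) for tq in pre_spikes for tp in post_spikes)

# A presynaptic train that consistently leads the postsynaptic one by 5 ms is
# strengthened; a train that lags by 5 ms is weakened.
post = np.arange(0.0, 1.0, 0.050)    # 20 Hz postsynaptic train
print(total_dw(post - 0.005, post))  # > 0: pre leads post
print(total_dw(post + 0.005, post))  # < 0: pre lags post
```
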
The brain can be understood as a collection of interacting neuronal oscillators, but the extent to which its sustained activity is due to coupling among brain areas is still unclear. Here we study the joint dynamics of two cortical columns described by Jansen-Rit neural mass models, and show that coupling between the columns gives rise to stochastic initiations of sustained collective activity, which can be interpreted as epileptic events. For large enough coupling strengths, termination of these events results mainly from the emergence of synchronization between the columns, and thus is controlled by coupling instead of noise. Stochastic triggering and noise-independent durations are characteristic of excitable dynamics, and thus we interpret our results in terms of collective excitability.
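
For readers unfamiliar with the model, the sketch below integrates two Jansen-Rit columns whose pyramidal output rates feed into each other's excitatory input. The column equations and constants are the standard Jansen-Rit ones; the coupling scheme, coupling gain, noise statistics, and the simple Euler integration are illustrative assumptions, not the setup of the study.

```python
import numpy as np

A, B = 3.25, 22.0     # excitatory / inhibitory synaptic gains (mV)
a, b = 100.0, 50.0    # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C

def sigm(v, e0=2.5, v0=6.0, r=0.56):
    """Sigmoid mapping average membrane potential (mV) to firing rate (1/s)."""
    return 2 * e0 / (1 + np.exp(r * (v0 - v)))

def derivs(y, p):
    """One Jansen-Rit column: y = (y0..y5), p = total excitatory input rate."""
    y0, y1, y2, y3, y4, y5 = y
    return np.array([
        y3, y4, y5,
        A * a * sigm(y1 - y2) - 2 * a * y3 - a * a * y0,
        A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a * a * y1,
        B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b * b * y2,
    ])

dt, K = 1e-4, 10.0                     # time step (s); coupling gain (assumed)
rng = np.random.default_rng(1)
cols = [np.zeros(6), np.zeros(6)]
eeg = []
for _ in range(int(5.0 / dt)):         # 5 s of joint activity
    out = [sigm(c[1] - c[2]) for c in cols]           # pyramidal output rates
    for i in (0, 1):
        p = 220.0 + 22.0 * rng.standard_normal()      # noisy background drive
        cols[i] = cols[i] + dt * derivs(cols[i], p + K * out[1 - i])
    eeg.append(cols[0][1] - cols[0][2])               # EEG-like signal, column 0
print(f"column-0 signal: mean {np.mean(eeg):.2f} mV, std {np.std(eeg):.2f} mV")
```
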
To infer information flow in any network of agents, it is important first and foremost to establish causal temporal relations between the nodes. Practical and automated methods that can infer causality are difficult to find and remain the subject of ongoing research. While Shannon information detects only correlation, there are several information-theoretic notions of directed information that have successfully detected causality in some systems, in particular in the neuroscience community. However, recent work has shown that some directed information measures can sometimes inadequately estimate the extent of causal relations, or even fail to identify existing cause-effect relations between components of systems, especially if neurons contribute in a cryptographic manner to influence the effector neuron. Here, we test how often cryptographic logic emerges in an evolutionary process that generates artificial neural circuits for two fundamental cognitive tasks: motion detection and sound localization. We also test whether information flow can be inferred, via the transfer entropy concept, from activity time series recorded from behaving digital brains, when compared to a ground-truth model of causal influence constructed from connectivity and circuit logic. Our results suggest that transfer entropy will sometimes fail to infer causality when it exists, and will sometimes suggest a causal connection where there is none. However, the extent of incorrect inference depends strongly on the cognitive task considered. These results emphasize the importance of understanding the fundamental logic processes that contribute to information flow in cognitive processing, and of quantifying their relevance in any given nervous system.
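
As a concrete illustration of the measure under test, here is a minimal plug-in estimator of transfer entropy for binary time series with a history length of one. It is a sketch of the general definition, not the estimator or ground-truth pipeline used in the study.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(X -> Y) in bits: how much x[t] helps predict y[t+1] beyond y[t] alone."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y0x0 = sum(v for (_, b, d), v in triples.items() if b == y0 and d == x0) / n
        p_y1y0 = sum(v for (q, b, _), v in triples.items() if q == y1 and b == y0) / n
        p_y0 = sum(v for (_, b, _), v in triples.items() if b == y0) / n
        # log ratio of p(y_next | y_now, x_now) to p(y_next | y_now)
        te += p_joint * np.log2((p_joint / p_y0x0) / (p_y1y0 / p_y0))
    return te

# X drives Y with one step of lag, so TE(X -> Y) is large while TE(Y -> X) is ~0.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10_000)
y = np.roll(x, 1) ^ (rng.random(10_000) < 0.1)   # noisy, lagged copy of x
print(transfer_entropy(x, y), transfer_entropy(y, x))
```
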
Most nervous systems encode information about stimuli in the responding activity of large neuronal networks. This activity often manifests itself as dynamically coordinated sequences of action potentials. Since multi-electrode recordings are now a standard tool in neuroscience research, it is important to have a measure of such network-wide behavioral coordination and information sharing that is applicable to multiple neural spike-train data. We propose a new statistic, informational coherence, which measures how much better one unit can be predicted by knowing the dynamical state of another. We argue that informational coherence is a measure of association and shared information superior to traditional pairwise measures of synchronization and correlation. To find the dynamical states, we use a recently introduced algorithm that reconstructs effective state spaces from stochastic time series. We then extend the pairwise measure to a multivariate analysis of the network by estimating the network multi-information. We illustrate our method by testing it on a detailed model of the transition from gamma to beta rhythms.
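
A rough sketch of the statistic's shape: mutual information between the dynamical states of two units, normalized so that a value of 1 means the lower-entropy unit's state is fully determined by the other's. The discrete state labels below stand in for the effective states reconstructed by the state-space algorithm; the plug-in estimators and the normalization by the smaller marginal entropy are our assumptions for illustration.

```python
import numpy as np
from collections import Counter

def entropy(seq):
    """Plug-in Shannon entropy (bits) of a discrete sequence."""
    counts = np.array(list(Counter(seq).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def informational_coherence(sx, sy):
    """I(Sx; Sy) normalized by min(H(Sx), H(Sy)); inputs are state-label sequences."""
    hx, hy = entropy(sx), entropy(sy)
    mi = hx + hy - entropy(list(zip(sx, sy)))   # joint entropy via paired labels
    lo = min(hx, hy)
    return mi / lo if lo > 0 else 0.0

# Two units whose states agree 90% of the time share most of their information;
# an unrelated third unit scores near zero.
rng = np.random.default_rng(0)
s1 = rng.integers(0, 4, 20_000)
s2 = np.where(rng.random(20_000) < 0.9, s1, rng.integers(0, 4, 20_000))
s3 = rng.integers(0, 4, 20_000)
print(informational_coherence(s1, s2), informational_coherence(s1, s3))
```
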
Many networks are important because they are substrates for dynamical systems, and their pattern of functional connectivity can itself be dynamic -- they can functionally reorganize, even if their underlying anatomical structure remains fixed. However, the recent rapid progress in discovering the community structure of networks has overwhelmingly focused on that constant anatomical connectivity. In this paper, we lay out the problem of discovering "functional communities", and describe an approach to doing so. This method combines recent work on measuring information sharing across stochastic networks with an existing and successful community-discovery algorithm for weighted networks. We illustrate it with an application to a large biophysical model of the transition from beta to gamma rhythms in the hippocampus.
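
The two ingredients combine naturally in code: pairwise information-sharing scores become the edge weights of a graph, and a weighted community-discovery algorithm partitions the nodes into functional communities. In the sketch below the scores are taken as given, and networkx's greedy modularity routine is a stand-in assumption for the community algorithm used in the paper.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def functional_communities(weights):
    """weights: symmetric (n, n) array of pairwise information-sharing scores."""
    g = nx.Graph()
    n = weights.shape[0]
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if weights[i, j] > 0:                    # keep only informative pairs
                g.add_edge(i, j, weight=float(weights[i, j]))
    return greedy_modularity_communities(g, weight="weight")

# Toy usage: two 3-node blocks that share far more information internally
# than across blocks should be recovered as two communities.
w = np.full((6, 6), 0.05)
w[:3, :3] = 0.8
w[3:, 3:] = 0.8
np.fill_diagonal(w, 0.0)
print(functional_communities(w))   # expect {0, 1, 2} and {3, 4, 5}
```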