
Autonomous learning of nonlocal stochastic neuron dynamics

Posted by Tyler Maltba
Publication date: 2020
Paper language: English





Neuronal dynamics is driven by externally imposed or internally generated random excitations/noise, and is often described by systems of random or stochastic ordinary differential equations. Such systems admit a distribution of solutions, which is (partially) characterized by the single-time joint probability density function (PDF) of system states. It can be used to calculate such information-theoretic quantities as the mutual information between the stochastic stimulus and various internal states of the neuron (e.g., membrane potential), as well as various spiking statistics. When random excitations are modeled as Gaussian white noise, the joint PDF of neuron states satisfies exactly a Fokker-Planck equation. However, most biologically plausible noise sources are correlated (colored). In this case, the resulting PDF equations require a closure approximation. We propose two methods for closing such equations: a modified nonlocal large-eddy-diffusivity closure and a data-driven closure relying on sparse regression to learn relevant features. The closures are tested for the stochastic non-spiking leaky integrate-and-fire and FitzHugh-Nagumo (FHN) neurons driven by sine-Wiener noise. Mutual information and total correlation between the random stimulus and the internal states of the neuron are calculated for the FHN neuron.
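The closures themselves are not reproduced here, but as a point of reference, the joint PDF and stimulus-state mutual information for an FHN neuron driven by sine-Wiener noise can be estimated by brute-force Monte Carlo. The sketch below does exactly that; the FHN parameters, noise amplitude, correlation time, and bin counts are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Brute-force Monte Carlo sketch (not the paper's closure methods): a
# FitzHugh-Nagumo neuron driven by sine-Wiener bounded noise, a histogram
# estimate of the single-time joint PDF of (v, xi), and a plug-in estimate of
# the mutual information I(v; xi). All parameter values are assumptions.

rng = np.random.default_rng(0)
M, T, dt = 5000, 10.0, 1e-3             # realizations, time horizon, time step
A, tau = 0.3, 1.0                       # sine-Wiener amplitude and correlation time
eps, a, b, I_ext = 0.08, 0.7, 0.8, 0.5  # FHN parameters and constant input (assumed)

v = -1.0 * np.ones(M)                   # membrane potential
w = -0.5 * np.ones(M)                   # recovery variable
W = np.zeros(M)                         # Wiener process driving the bounded noise
xi = np.zeros(M)

for _ in range(int(T / dt)):
    W += np.sqrt(dt) * rng.standard_normal(M)
    xi = A * np.sin(np.sqrt(2.0 / tau) * W)        # sine-Wiener (bounded) noise
    v += dt * (v - v**3 / 3.0 - w + I_ext + xi)    # explicit Euler step for FHN
    w += dt * eps * (v + a - b * w)

# Single-time joint PDF of (v, xi) at t = T from a 2-D histogram
pdf, v_edges, xi_edges = np.histogram2d(v, xi, bins=40, density=True)

# Plug-in mutual information estimate I(v; xi), in nats
cell = np.diff(v_edges)[:, None] * np.diff(xi_edges)[None, :]
p_joint = pdf * cell                               # cell probabilities
p_v = p_joint.sum(axis=1, keepdims=True)
p_xi = p_joint.sum(axis=0, keepdims=True)
mask = p_joint > 0
mi = np.sum(p_joint[mask] * np.log(p_joint[mask] / (p_v @ p_xi)[mask]))
print(f"Monte Carlo estimate of I(v; xi) at t = {T}: {mi:.3f} nats")
```

The closures proposed in the paper would replace this brute-force sampling with a (learned) PDF equation; the histogram here only plays the role of a reference solution.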




Read also

We consider a pair of stochastic integrate-and-fire neurons receiving correlated stochastic inputs. The evolution of this system can be described by the corresponding Fokker-Planck equation with non-trivial boundary conditions resulting from the refractory period and firing threshold. We propose a finite volume method that is orders of magnitude faster than the Monte Carlo methods traditionally used to model such systems. The resulting numerical approximations are proved to be accurate, nonnegative, and to integrate to 1. We also approximate the transient evolution of the system using an Ornstein-Uhlenbeck process, and use the result to examine the properties of the joint output of cell pairs. The results suggest that the joint output of a cell pair is most sensitive to changes in input variance, and less sensitive to changes in input mean and correlation.
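As a rough companion to this abstract, the following sketch simulates a pair of leaky integrate-and-fire membrane potentials driven by correlated Gaussian inputs with a hard threshold and reset, and reports the resulting spike-count correlation. It is a plain Monte Carlo baseline, not the finite volume scheme or the Ornstein-Uhlenbeck approximation described above, and every parameter value is an assumption.

```python
import numpy as np

# Monte Carlo baseline (not the finite volume scheme): two leaky
# integrate-and-fire membrane potentials driven by correlated Gaussian inputs,
# with threshold-and-reset dynamics, used to look at how the spike-count
# correlation of the pair depends on the input correlation. All parameter
# values and the OU-style discretization convention are assumptions.

rng = np.random.default_rng(1)
M, T, dt = 5000, 2.0, 1e-4            # trials, trial length (s), time step (s)
tau_m, mu, sigma = 0.02, 0.8, 0.6     # membrane time constant, input mean/scale
rho = 0.3                             # input correlation coefficient
v_th, v_reset = 1.0, 0.0              # firing threshold and reset value

v1 = np.zeros(M)
v2 = np.zeros(M)
n1 = np.zeros(M)                      # spike counts per trial, neuron 1
n2 = np.zeros(M)                      # spike counts per trial, neuron 2

for _ in range(int(T / dt)):
    z1 = rng.standard_normal(M)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(M)  # correlated input
    v1 += dt / tau_m * (mu - v1) + sigma * np.sqrt(dt / tau_m) * z1
    v2 += dt / tau_m * (mu - v2) + sigma * np.sqrt(dt / tau_m) * z2
    s1, s2 = v1 >= v_th, v2 >= v_th   # threshold crossings
    n1 += s1
    n2 += s2
    v1[s1], v2[s2] = v_reset, v_reset # reset after a spike (no refractory period)

corr = np.corrcoef(n1, n2)[0, 1]
print(f"input correlation {rho:.2f} -> output spike-count correlation {corr:.3f}")
```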
A main concern in cognitive neuroscience is to decode overt neural spike train observations and infer the latent representations underlying neural circuits. However, traditional methods entail strong priors on network structure and hardly meet the demands of real spike data. Here we propose a novel neural network approach, called the Neuron Activation Network, that extracts neural information explicitly from single-trial neuron population spike trains. Our proposed method consists of a spatiotemporal learning procedure on the sensory environment and a message passing mechanism on the population graph, followed by a recursive neuron activation process. Our model aims to reconstruct neuron information while inferring representations of neuron spiking states. We apply our model to retinal ganglion cells, and the experimental results suggest that our model is more capable of generating high-fidelity neural spike sequences than state-of-the-art methods, as well as being more expressive and having the potential to disclose latent spiking mechanisms. The source code will be released with the final paper.
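The architecture itself is not specified in this abstract, so the sketch below only illustrates the general idea of a single message-passing step over a neuron population graph followed by a sigmoid activation readout; the adjacency construction, weight matrices, and update rule are assumptions, not the Neuron Activation Network.

```python
import numpy as np

# Generic sketch of one message-passing step on a neuron population graph,
# meant only to illustrate the idea named in the abstract above; the actual
# Neuron Activation Network is not specified here, so every design choice
# below (random graph, mean aggregation, tanh update, sigmoid readout) is an
# assumption for illustration.

rng = np.random.default_rng(5)
n_neurons, d = 32, 16
H = rng.normal(size=(n_neurons, d))          # per-neuron state embeddings
A = (rng.random((n_neurons, n_neurons)) < 0.1).astype(float)
np.fill_diagonal(A, 0.0)                     # no self-edges

# Aggregate messages from neighbors, then update each neuron's embedding.
deg = A.sum(axis=1, keepdims=True).clip(min=1.0)
messages = (A @ H) / deg                     # mean over neighboring neurons
W_self, W_msg = rng.normal(size=(d, d)), rng.normal(size=(d, d))
H_next = np.tanh(H @ W_self + messages @ W_msg)

# A recursive activation readout could then map embeddings to spiking probabilities.
spike_prob = 1.0 / (1.0 + np.exp(-H_next @ rng.normal(size=(d,))))
print("spiking probabilities shape:", spike_prob.shape)
```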
Meditation practices have long been claimed by practitioners to have a positive effect on the regulation of mood and emotion, and in recent times there has been a sustained effort to provide a more precise description of the changes induced by meditation on the human brain. Longitudinal studies have reported morphological changes in cortical thickness and volume in selected brain regions due to meditation practice, which is interpreted as evidence for its effectiveness beyond subjective self-reporting. Evidence based on real-time monitoring of the meditating brain by functional imaging modalities such as MEG or EEG remains a challenge. In this article we consider MEG data collected during meditation sessions of experienced Buddhist monks practicing focused attention (Samatha) and open monitoring (Vipassana) meditation, contrasted with a resting state with eyes closed. The MEG data are first mapped to time series of brain activity averaged over brain regions corresponding to a standard Destrieux brain atlas, and then, by bootstrapping and spectral analysis, to data matrices representing a random sample of power spectral densities over bandwidths corresponding to the $\alpha$, $\beta$, $\gamma$, and $\theta$ bands. We demonstrate using linear discriminant analysis (LDA) that samples corresponding to different meditative or resting states contain enough fingerprints of the brain state to allow a separation between states, and we identify the brain regions that appear to contribute to the separation. Our findings suggest that the cingulate cortex, insular cortex, and some of the internal structures, most notably the accumbens, caudate and putamen nuclei, thalamus, and amygdalae, stand out as separating regions, which seems to correlate well with earlier findings based on longitudinal studies.
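A minimal sketch of the classification step described above, run on synthetic data in place of the MEG recordings: each sample is a flattened vector of per-region band powers, and LDA is scored by cross-validation and inspected for its most discriminative features. The region count, band count, and the injected class difference are assumptions for illustration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the band-power feature matrices: two "brain states",
# each sample a flattened vector of per-region power in 4 spectral bands.
# The small shift injected into state B mimics a detectable fingerprint;
# the region/band counts and effect size are assumptions.

rng = np.random.default_rng(2)
n_regions, n_bands, n_samples = 148, 4, 200     # 148 cortical regions assumed

X_a = rng.normal(size=(n_samples, n_regions * n_bands))
X_b = rng.normal(size=(n_samples, n_regions * n_bands))
X_b[:, :10] += 0.5                               # assumed effect in 10 features
X = np.vstack([X_a, X_b])
y = np.r_[np.zeros(n_samples), np.ones(n_samples)]

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Discriminant weights indicate which region/band combinations contribute
# most to the separation between states.
lda.fit(X, y)
top = np.argsort(np.abs(lda.coef_[0]))[::-1][:5]
print("top separating features (flattened region*band indices):", top)
```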
Simulating and imitating the neuronal network of humans or mammals is a popular topic that has been explored for many years in the fields of pattern recognition and computer vision. Inspired by neuronal conduction characteristics in the primary visual cortex of cats, pulse-coupled neural networks (PCNNs) can exhibit synchronous oscillation behavior and can process digital images without training. However, according to studies of single cells in the cat primary visual cortex, when a neuron is stimulated by an external periodic signal, the interspike-interval (ISI) distribution is multimodal. This phenomenon cannot be explained by existing PCNN models. By analyzing the working mechanism of the PCNN, we present a novel neuron model of the primary visual cortex consisting of a continuous-coupled neural network (CCNN). Our model inherits the threshold exponential decay and synchronous pulse oscillation properties of the original PCNN model, and it can exhibit chaotic behavior consistent with the test results of cat primary visual cortex neurons. Therefore, our CCNN model is closer to real visual neural networks. For image segmentation tasks, the algorithm based on the CCNN model performs better than state-of-the-art visual cortex neural network models. The strength of our approach is that it helps neurophysiologists further understand how the primary visual cortex works and can be used to quantitatively predict the temporal-spatial behavior of real neural networks. The CCNN may also inspire engineers to create brain-inspired deep learning networks for artificial intelligence purposes.
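For orientation, the sketch below runs a classical PCNN-style iteration in which the hard step output is replaced by a continuous sigmoid coupling, which is one way to read "continuous-coupled"; it is an illustrative stand-in, not the authors' CCNN, and the kernel, gains, and decay constants are assumed values.

```python
import numpy as np
from scipy.signal import convolve2d

# Illustrative PCNN-style update with a continuous (sigmoid) output in place
# of the hard step; an assumed reading of "continuous-coupled", not the CCNN
# equations from the paper. Kernel, gains, and decays are assumed values.

def ccnn_like_step(S, F, L, E, Y, beta=0.2, aF=0.1, aL=0.3, aE=0.2,
                   VF=0.5, VL=0.5, VE=10.0):
    """One iteration over a 2-D stimulus S (e.g. a grayscale image)."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                 # local coupling kernel (assumed)
    link = convolve2d(Y, W, mode="same")
    F = np.exp(-aF) * F + VF * link + S             # feeding input
    L = np.exp(-aL) * L + VL * link                 # linking input
    U = F * (1.0 + beta * L)                        # internal activity
    Y = 1.0 / (1.0 + np.exp(-(U - E)))              # continuous output (vs. hard step)
    E = np.exp(-aE) * E + VE * Y                    # exponentially decaying threshold
    return F, L, E, Y

# Usage on a toy image: iterate a few times and inspect the continuous firing map.
rng = np.random.default_rng(4)
S = rng.random((64, 64))
F = L = E = Y = np.zeros_like(S)
for _ in range(10):
    F, L, E, Y = ccnn_like_step(S, F, L, E, Y)
print("mean activation after 10 iterations:", float(Y.mean()))
```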
Nonlocal operators of fractional type are a popular modeling choice for applications that do not adhere to classical diffusive behavior; however, one major challenge in nonlocal simulations is the selection of model parameters. In this work we propose an optimization-based approach to parameter identification for fractional models with an optional truncation radius. We formulate the inference problem as an optimal control problem where the objective is to minimize the discrepancy between observed data and an approximate solution of the model, and the control variables are the fractional order and the truncation length. For the numerical solution of the minimization problem we propose a gradient-based approach, where we enhance the numerical performance by an approximation of the bilinear form of the state equation and its derivative with respect to the fractional order. Several numerical tests in one and two dimensions illustrate the theoretical results and show the robustness and applicability of our method.
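A toy version of this inference problem, under simplifying assumptions: a 1-D periodic fractional diffusion equation is solved spectrally, and only the fractional order (no truncation radius) is recovered by minimizing the L2 discrepancy with observed data. The forward solver and the bounded scalar optimizer below stand in for, and are much simpler than, the gradient-based scheme the abstract describes.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy fractional-order identification: recover s in u_t + (-Delta)^s u = 0 on
# a periodic 1-D domain by minimizing the L2 discrepancy between "observed"
# data (synthetic, with noise) and the spectral model solution. The spectral
# solver and fitting only the order are simplifying assumptions.

n, t_obs = 256, 0.1
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers on [0, 2*pi)
u0 = np.exp(-10 * (x - np.pi) ** 2)              # initial condition (assumed)

def solve(s):
    """Spectral solution of u_t = -(-Delta)^s u at time t_obs."""
    u0_hat = np.fft.fft(u0)
    return np.real(np.fft.ifft(u0_hat * np.exp(-np.abs(k) ** (2 * s) * t_obs)))

s_true = 0.7
u_obs = solve(s_true) + 1e-3 * np.random.default_rng(3).standard_normal(n)

def discrepancy(s):
    return np.sum((solve(s) - u_obs) ** 2)       # objective of the control problem

res = minimize_scalar(discrepancy, bounds=(0.1, 1.0), method="bounded")
print(f"true order {s_true}, recovered order {res.x:.3f}")
```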