
A Novel Neuron Model of Visual Processor

Added by Jizhao Liu
Publication date: 2021
Language: English





Simulating and imitating the neuronal networks of humans or mammals is a popular topic that has been explored for many years in pattern recognition and computer vision. Inspired by the neuronal conduction characteristics of the cat primary visual cortex, pulse-coupled neural networks (PCNNs) exhibit synchronous oscillation behavior, which allows them to process digital images without training. However, single-cell studies of the cat primary visual cortex show that when a neuron is stimulated by an external periodic signal, its interspike-interval (ISI) distribution is multimodal. This phenomenon cannot be explained by any existing PCNN model. By analyzing the working mechanism of the PCNN, we present a novel neuron model of the primary visual cortex: the continuous-coupled neural network (CCNN). Our model inherits the threshold exponential decay and synchronous pulse oscillation properties of the original PCNN model, and it can exhibit chaotic behavior consistent with test results from cat primary visual cortex neurons. Therefore, the CCNN model is closer to real visual neural networks. On image segmentation tasks, an algorithm based on the CCNN model outperforms state-of-the-art visual cortex neural network models. The strength of our approach is that it helps neurophysiologists further understand how the primary visual cortex works and can be used to quantitatively predict the temporal-spatial behavior of real neural networks. The CCNN may also inspire engineers to create brain-inspired deep learning networks for artificial intelligence.
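To make the threshold exponential decay and continuous coupling concrete, here is a minimal single-neuron sketch. The equation forms follow the classic discrete-time PCNN, with the hard step output replaced by a sigmoid to make the coupling continuous; all parameter values and the exact sigmoid form are illustrative assumptions, not taken from the paper.

```python
import math

def ccnn_step(F, L, E, S, link, feed,
              aF=0.1, aL=0.3, aE=0.5, VF=0.5, VL=0.5, VE=20.0, beta=0.2):
    """One discrete-time update of a single continuous-coupled neuron.
    S is the external stimulus; link/feed are weighted sums of neighbor
    outputs. Parameter values here are illustrative assumptions."""
    F = math.exp(-aF) * F + VF * feed + S        # feeding input
    L = math.exp(-aL) * L + VL * link            # linking input
    U = F * (1.0 + beta * L)                     # internal activity
    Y = 1.0 / (1.0 + math.exp(-(U - E)))         # continuous (sigmoid) output
    E = math.exp(-aE) * E + VE * Y               # exponentially decaying threshold
    return F, L, E, Y
```

Iterating this update over a grid of neurons, with link/feed computed from neighboring outputs, yields the synchronous pulse waves used for training-free image segmentation.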



Related Research

The input-output behaviour of the Wiener neuronal model subject to alternating input is studied under the assumption that the effect of such an input is to make the drift itself of an alternating type. Firing densities and related statistics are obtained via simulations of the sample paths of the process in the following three cases: the drift changes occur during random periods characterized by (i) an exponential distribution, (ii) an Erlang distribution with a preassigned shape parameter, and (iii) a deterministic distribution. The obtained results are compared with those holding for the Wiener neuronal model subject to sinusoidal input.
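Case (i) above can be sketched by Euler-Maruyama simulation of a single sample path: the drift flips between two values at exponentially distributed epochs, and the firing time is the first passage through a threshold. All parameter values below are illustrative assumptions.

```python
import random, math

def first_passage_time(mu_up=1.5, mu_down=-0.5, rate=1.0, sigma=1.0,
                       threshold=10.0, dt=1e-3, t_max=200.0, rng=None):
    """Sample one firing (first-passage) time of a Wiener neuronal model
    whose drift alternates at exponentially distributed epochs (case (i)).
    Returns None if no spike occurs within t_max."""
    rng = rng or random.Random()
    x, t, mu = 0.0, 0.0, mu_up
    next_switch = rng.expovariate(rate)
    while t < t_max:
        if t >= next_switch:                       # drift reversal epoch
            mu = mu_down if mu == mu_up else mu_up
            next_switch += rng.expovariate(rate)
        x += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= threshold:                         # membrane reaches firing level
            return t
    return None
```

Repeating this over many sample paths and histogramming the returned times gives an empirical firing density; cases (ii) and (iii) differ only in how `next_switch` is drawn.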
During wakefulness and deep sleep brain states, cortical neural networks show different behavior, with the latter characterized by transients of high network activity. To investigate their impact on neuronal behavior, we apply a pairwise Ising model analysis by inferring the maximum entropy model that reproduces the single and pairwise moments of the neurons' spiking activity. In this work we first review the inference algorithm introduced in Ferrari, Phys. Rev. E (2016). We then succeed in applying the algorithm to infer the model from a large ensemble of neurons recorded by multi-electrode array in human temporal cortex. We compare the Ising model's performance in capturing the statistical properties of the network activity during wakefulness and deep sleep. For the latter, the pairwise model misses relevant transients of high network activity, suggesting that additional constraints are necessary to accurately model the data.
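For intuition, the pairwise maximum-entropy (Ising) distribution can be evaluated exactly for a handful of neurons by enumerating all spin states. This is an illustrative sketch of the model family, not the inference algorithm of Ferrari (2016), and the field/coupling values in the test are arbitrary.

```python
import itertools, math

def ising_probs(h, J):
    """Exact Boltzmann distribution of a pairwise Ising model over spins
    s_i in {-1, +1}: P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j).
    h: list of fields; J: dict {(i, j): coupling}. Feasible only for
    small N, since the state space has 2**N configurations."""
    N = len(h)
    def energy(s):
        e = -sum(h[i] * s[i] for i in range(N))
        e -= sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        return e
    states = list(itertools.product((-1, 1), repeat=N))
    weights = [math.exp(-energy(s)) for s in states]
    Z = sum(weights)                      # partition function
    return {s: w / Z for s, w in zip(states, weights)}
```

For realistic ensemble sizes the partition function is intractable, which is why approximate inference schemes such as the one reviewed in the paper are needed.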
Emily Toomey, Ken Segall, 2019
With the rising societal demand for more information-processing capacity with lower power consumption, alternative architectures inspired by the parallelism and robustness of the human brain have recently emerged as possible solutions. In particular, spiking neural networks (SNNs) offer a bio-realistic approach, relying on pulses analogous to action potentials as units of information. While software-encoded networks provide flexibility and precision, they are often computationally expensive. As a result, hardware SNNs based on the spiking dynamics of a device or circuit represent an increasingly appealing direction. Here, we propose to use superconducting nanowires as a platform for the development of an artificial neuron. Building on an architecture first proposed for Josephson junctions, we rely on the intrinsic nonlinearity of two coupled nanowires to generate spiking behavior, and use electrothermal circuit simulations to demonstrate that the nanowire neuron reproduces multiple characteristics of biological neurons. Furthermore, by harnessing the nonlinearity of the superconducting nanowires' inductance, we develop a design for a variable inductive synapse capable of both excitatory and inhibitory control. We demonstrate that this synapse design supports direct fanout, a feature that has been difficult to achieve in other superconducting architectures, and that the nanowire neuron's nominal energy performance is competitive with that of current technologies.
The classical biophysical Morris-Lecar model of neuronal excitability predicts that, upon stimulation of the neuron with a sufficiently large constant depolarizing current, there exists a finite interval of current values where periodic spike generation occurs. Above the upper boundary of this interval, there is a four-stage damping of the spike amplitude: 1) minor primary damping, reflecting a typical transient to a stationary dynamic state; 2) a plateau of nearly undamped periodic oscillations; 3) strong damping; and 4) convergence to a constant asymptotic value of the neuron potential. We have shown that in the vicinity of the asymptote the Morris-Lecar equations can be reduced to the standard equation for exponentially damped harmonic oscillations. Importantly, all coefficients of this equation can be explicitly expressed through parameters of the original Morris-Lecar model, enabling direct comparison of the numerical and analytical solutions for the neuron potential dynamics at later stages of the spike amplitude damping.
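The numerical side of this comparison can be sketched with a forward-Euler integration of the Morris-Lecar equations. The parameter values below are standard textbook choices and are an assumption here; they need not match those used in the paper.

```python
import math

def morris_lecar(I=90.0, dt=0.05, t_end=1000.0):
    """Forward-Euler integration of the Morris-Lecar model.
    Returns the membrane potential trace V(t). Parameters are standard
    textbook values (illustrative assumptions)."""
    C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0          # capacitance, conductances
    VL, VCa, VK = -60.0, 120.0, -84.0             # reversal potentials
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    V, w = -60.0, 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        m_inf = 0.5 * (1.0 + math.tanh((V - V1) / V2))   # fast Ca activation
        w_inf = 0.5 * (1.0 + math.tanh((V - V3) / V4))   # K activation target
        tau_w = 1.0 / math.cosh((V - V3) / (2.0 * V4))   # K activation time scale
        dV = (I - gL * (V - VL) - gCa * m_inf * (V - VCa)
              - gK * w * (V - VK)) / C
        dw = phi * (w_inf - w) / tau_w
        V += dt * dV
        w += dt * dw
        trace.append(V)
    return trace
```

Sweeping the injected current I upward past the periodic-spiking window should, under these parameter assumptions, reproduce the staged amplitude damping described above, with V eventually settling toward a constant asymptote.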
Neuronal dynamics is driven by externally imposed or internally generated random excitations/noise, and is often described by systems of random or stochastic ordinary differential equations. Such systems admit a distribution of solutions, which is (partially) characterized by the single-time joint probability density function (PDF) of system states. It can be used to calculate such information-theoretic quantities as the mutual information between the stochastic stimulus and various internal states of the neuron (e.g., membrane potential), as well as various spiking statistics. When random excitations are modeled as Gaussian white noise, the joint PDF of neuron states satisfies exactly a Fokker-Planck equation. However, most biologically plausible noise sources are correlated (colored). In this case, the resulting PDF equations require a closure approximation. We propose two methods for closing such equations: a modified nonlocal large-eddy-diffusivity closure and a data-driven closure relying on sparse regression to learn relevant features. The closures are tested for the stochastic non-spiking leaky integrate-and-fire and FitzHugh-Nagumo (FHN) neurons driven by sine-Wiener noise. Mutual information and total correlation between the random stimulus and the internal states of the neuron are calculated for the FHN neuron.
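The test system above, an FHN neuron driven by bounded sine-Wiener noise, can be simulated directly even though its PDF equation requires a closure. Below is a minimal Euler sketch; the parameter values are illustrative assumptions, not those used in the paper, and the sine-Wiener form xi(t) = A*sin(2*pi*W(t)/T) with W(t) a standard Wiener process is the common definition of that noise.

```python
import math, random

def fhn_sine_wiener(A=0.3, T=2.0, eps=0.08, a=0.7, b=0.8, I=0.5,
                    dt=1e-3, steps=50000, seed=1):
    """Euler simulation of a FitzHugh-Nagumo neuron driven by bounded
    sine-Wiener noise. Returns the voltage trace. Parameter values are
    illustrative assumptions."""
    rng = random.Random(seed)
    v, w, W = -1.0, -0.5, 0.0
    vs = []
    for _ in range(steps):
        W += math.sqrt(dt) * rng.gauss(0.0, 1.0)    # underlying Wiener path
        xi = A * math.sin(2.0 * math.pi * W / T)    # bounded colored noise
        v += dt * (v - v**3 / 3.0 - w + I + xi)     # fast voltage variable
        w += dt * eps * (v + a - b * w)             # slow recovery variable
        vs.append(v)
    return vs
```

Running an ensemble of such paths and histogramming (v, w) at a fixed time gives the empirical joint PDF against which a closure approximation can be checked.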