
Tracking Dynamics of Two-Dimensional Continuous Attractor Neural Networks

Posted by: C.C. Alan Fung
Publication date: 2009
Research field: Physics
Paper language: English





We introduce an analytically solvable model of two-dimensional continuous attractor neural networks (CANNs). In the absence of external stimuli, the synaptic input and the neuronal response form Gaussian bumps, which enable the network to track external stimuli through translational displacement in the two-dimensional space. Basis functions of the two-dimensional quantum harmonic oscillator in polar coordinates are introduced to describe the distortion modes of the Gaussian bump, and a perturbative method is applied to analyze the dynamics. Testing the method on the network behavior when the external stimulus abruptly changes its position, we obtain the reaction time and the amplitudes of the various distortion modes, in excellent agreement with simulation results.
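As a rough illustration of the tracking scenario described in the abstract, the following Python sketch evolves a two-dimensional CANN with a Gaussian recurrent kernel and divisive global inhibition on a periodic grid, and abruptly shifts the external stimulus halfway through the run. The rate equation, grid discretization and every parameter value below are assumptions made for the demo, not the paper's exact model.

```python
import numpy as np

# Rough 2-D CANN tracking sketch (periodic grid, divisive global inhibition).
# Assumed dynamics (not the paper's exact equations):
#   tau du/dt = -u + rho * (J conv r) + I_ext,
#   r = u_+^2 / (1 + k * rho * integral of u_+^2)

N, L = 64, 10.0                        # grid points per side, domain size
dx = L / N
xs = np.arange(N) * dx - L / 2
X, Y = np.meshgrid(xs, xs, indexing="ij")

tau, a, rho, k, A = 1.0, 0.5, 1.0 / dx**2, 0.5, 0.5
dt, steps = 0.01, 3000

def gauss2d(cx, cy, width):
    # Gaussian bump centred at (cx, cy) with periodic boundary conditions
    dxp = (X - cx + L / 2) % L - L / 2
    dyp = (Y - cy + L / 2) % L - L / 2
    return np.exp(-(dxp**2 + dyp**2) / (2 * width**2))

J = gauss2d(0.0, 0.0, a) / (2 * np.pi * a**2)   # translation-invariant kernel
J_hat = np.fft.fft2(np.fft.ifftshift(J))        # for circular convolution

u = gauss2d(0.0, 0.0, a)               # start with a bump at the origin
stim = (0.0, 0.0)

for t in range(steps):
    if t == steps // 2:
        stim = (2.0, 1.0)              # abrupt shift of the external stimulus
    u_plus = np.maximum(u, 0.0)
    r = u_plus**2 / (1.0 + k * rho * np.sum(u_plus**2) * dx**2)
    rec = rho * np.real(np.fft.ifft2(J_hat * np.fft.fft2(r))) * dx**2
    u += dt / tau * (-u + rec + A * gauss2d(*stim, a))
    if t % 500 == 0:
        i, j = np.unravel_index(np.argmax(u), u.shape)
        print(f"t = {t*dt:5.1f}  bump centre ~ ({xs[i]:+.2f}, {xs[j]:+.2f})")
```

Tracking the peak of u over time gives a crude numerical estimate of the reaction time after the abrupt shift; the paper instead obtains it analytically from the perturbative expansion.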




Read also

Understanding how the dynamics of a neural network is shaped by the network structure, and consequently how the network structure facilitates the functions implemented by the neural system, is at the core of using mathematical models to elucidate brain functions. This study investigates the tracking dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of neuronal recurrent interactions, CANNs can hold a continuous family of stationary states. They form a continuous manifold in which the neural system is neutrally stable. We systematically explore how this property facilitates the tracking performance of a CANN, which is believed to have clear correspondence with brain functions. By using the wave functions of the quantum harmonic oscillator as the basis, we demonstrate how the dynamics of a CANN is decomposed into different motion modes, corresponding to distortions in the amplitude, position, width or skewness of the network state. We then develop a perturbative approach that utilizes the dominating movement of the network's stationary states in the state space. This method allows us to approximate the network dynamics up to an arbitrary accuracy depending on the order of perturbation used. We quantify the distortions of a Gaussian bump during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable and the reaction time for the network to catch up with an abrupt change in the stimulus.
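The mode decomposition mentioned above can be illustrated in one dimension: projecting a displaced bump onto normalised Hermite-Gaussian (quantum harmonic oscillator) basis functions separates the distortion into height-, position-, width- and skewness-like components. The basis width and the test distortion in this sketch are arbitrary choices for the demonstration and do not reproduce the paper's calculation.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

# 1-D illustration of the mode decomposition: project a bump distortion onto
# normalised Hermite-Gaussian (quantum harmonic oscillator) basis functions.
# Mode 0 ~ height, 1 ~ position, 2 ~ width, 3 ~ skewness.

a = 0.5                        # assumed basis length scale
x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]

def phi(n, xx):
    """n-th normalised harmonic-oscillator wavefunction with length scale a."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = 1.0 / sqrt(2**n * factorial(n) * a * sqrt(pi))
    return norm * hermval(xx / a, c) * np.exp(-xx**2 / (2 * a**2))

bump = np.exp(-x**2 / (2 * a**2))              # stationary Gaussian bump
shifted = np.exp(-(x - 0.2)**2 / (2 * a**2))   # the same bump displaced by 0.2
delta = shifted - bump                         # distortion to be decomposed

for n in range(4):
    c_n = np.sum(phi(n, x) * delta) * dx
    print(f"mode {n}: coefficient {c_n:+.4f}")
# A small displacement projects mainly onto the odd, position-like mode n = 1.
```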
We investigate the dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of stationary states. We systematically explore how their neutral stability facilitates the tracking performance of a CANN, which is believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network's stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up with an abrupt change in the stimulus.
Aldo Battista (2019)
Recurrent neural networks (RNNs) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ~N^2 pairwise interactions in an RNN with N neurons to embed L manifolds of dimension D << N. We show that the capacity, i.e. the maximal ratio L/N, decreases as |log(epsilon)|^(-D), where epsilon is the error on the position encoded by the neural activity along each manifold. Hence, RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
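To get a feel for the quoted scaling, the snippet below simply evaluates |log(epsilon)|^(-D) at a few error levels; since the proportionality constant is not given here, only ratios between error levels are meaningful.

```python
import math

# Plug-in illustration of the stated capacity scaling L/N ~ |log(eps)|^(-D).
# The prefactor is unknown here, so only ratios between error levels matter.
def relative_capacity(eps, D):
    return abs(math.log(eps)) ** (-D)

D = 2
for eps in (1e-1, 1e-2, 1e-3):
    ratio = relative_capacity(eps, D) / relative_capacity(1e-1, D)
    print(f"eps = {eps:g}: capacity relative to eps = 0.1 is {ratio:.3f}")
# Tightening the positional error by two orders of magnitude (0.1 -> 0.001)
# reduces the capacity only about nine-fold for D = 2, which is why many
# manifolds can be stored at high spatial resolution.
```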
Attractor models are simplified models used to describe the dynamics of firing rate profiles of a pool of neurons. The firing rate profile, or the neuronal activity, is thought to carry information. Continuous attractor neural networks (CANNs) describe the neural processing of continuous information such as object position, object orientation and direction of object motion. Recently, it was found that, in one-dimensional CANNs, short-term synaptic depression can destabilize bump-shaped neuronal attractor activity profiles. In this paper, we study two-dimensional CANNs with short-term synaptic depression and with spike frequency adaptation. We found that the dynamics of CANNs with short-term synaptic depression and CANNs with spike frequency adaptation are qualitatively similar. We also found that in both kinds of CANNs the perturbative approach can be used to predict phase diagrams, dynamical variables and the speed of spontaneous motion.
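A minimal sketch of how short-term synaptic depression is commonly coupled to a CANN is given below, in one dimension for brevity: a depression variable p scales the recurrent input and recovers on a slow timescale. The specific equations and parameter values are a generic textbook form chosen for illustration and may differ from the model analysed in the paper.

```python
import numpy as np

# Illustrative 1-D CANN with short-term synaptic depression (STD), integrated
# with explicit Euler steps. Assumed form:
#   tau   du/dt = -u + rho * J conv (p * r) + I_ext   (recurrent input scaled by p)
#   tau_d dp/dt = 1 - p - tau_d * beta * p * r        (depletion and slow recovery)

N, L = 256, 10.0
dx = L / N
x = np.arange(N) * dx - L / 2
tau, tau_d, a, rho, k, beta, dt = 1.0, 50.0, 0.5, 1.0 / dx, 0.5, 0.01, 0.05

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)                               # periodic distances
J = np.exp(-d**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)

rng = np.random.default_rng(0)
u = np.exp(-x**2 / (2 * a**2)) + 0.01 * rng.standard_normal(N)  # noisy bump
p = np.ones(N)                                         # available resources

def step(u, p, I_ext=0.0):
    u_plus = np.maximum(u, 0.0)
    r = u_plus**2 / (1.0 + k * rho * np.sum(u_plus**2) * dx)
    du = (-u + rho * (J @ (p * r)) * dx + I_ext) / tau
    dp = (1.0 - p) / tau_d - beta * p * r
    return u + dt * du, p + dt * dp

for t in range(4000):
    u, p = step(u, p)
    if t % 1000 == 0:
        print(f"t = {t*dt:6.1f}  bump centre ~ {x[np.argmax(u)]:+.2f}")
# Whether the bump stays put or drifts depends on the depression strength beta
# and timescale tau_d; the printed centre trajectory shows which regime these
# (assumed) parameters fall in.
```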
Jiangran Wang, Zhuo Chen, Di Luo (2021)
We develop a spacetime neural network method with second order optimization for solving quantum dynamics from the high dimensional Schrödinger equation. In contrast to the standard iterative first order optimization and the time-dependent variational principle, our approach utilizes the implicit mid-point method and generates the solution for all spatial and temporal values simultaneously after optimization. We demonstrate the method in the Schrödinger equation with a self-normalized autoregressive spacetime neural network construction. Future explorations for solving different high dimensional differential equations are discussed.
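For the linear Schrödinger equation i dψ/dt = Hψ, the implicit mid-point rule mentioned above reduces to the unitary Crank-Nicolson update (I + i·dt·H/2) ψ_{n+1} = (I - i·dt·H/2) ψ_n. The sketch below applies this update on a small finite-difference grid with an assumed harmonic potential; it is a conventional grid integrator meant only to illustrate the time-stepping rule, not the paper's spacetime neural-network method.

```python
import numpy as np

# Implicit mid-point rule for i dpsi/dt = H psi on a small 1-D grid
# (hbar = m = 1). For Hermitian H the update is unitary, so the norm of psi
# is preserved over time.

N, L, dt = 200, 20.0, 0.01
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)

# H = kinetic term (second-difference Laplacian) + harmonic potential (assumed)
lap = (np.diag(-2.0 * np.ones(N)) +
       np.diag(np.ones(N - 1), 1) +
       np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

A = np.eye(N) + 1j * dt / 2 * H
B = np.eye(N) - 1j * dt / 2 * H

psi = np.exp(-(x - 1.0)**2).astype(complex)   # displaced Gaussian wave packet
psi /= np.linalg.norm(psi)

for _ in range(1000):
    psi = np.linalg.solve(A, B @ psi)         # one implicit-mid-point step

print("norm after 1000 steps:", np.linalg.norm(psi))   # stays ~1
```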