
A spherical Hopfield model

Posted by: Desire Bolle
Publication date: 2003
Research field: Physics
Paper language: English





We introduce a spherical Hopfield-type neural network involving neurons and patterns that are continuous variables. We study both the thermodynamics and dynamics of this model. In order to have a retrieval phase a quartic term is added to the Hamiltonian. The thermodynamics of the model is exactly solvable and the results are replica symmetric. A Langevin dynamics leads to a closed set of equations for the order parameters and effective correlation and response function typical for neural networks. The stationary limit corresponds to the thermodynamic results. Numerical calculations illustrate our findings.
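The abstract describes Langevin dynamics for continuous spins on a sphere, with a quartic term added to the Hamiltonian to stabilize retrieval. The following is a minimal sketch of that idea, not the authors' exact model: it assumes Hebbian couplings, takes one plausible quartic choice $H = -\frac{N}{2}\sum_\mu m_\mu^2 - \frac{uN}{4}\sum_\mu m_\mu^4$ in the pattern overlaps $m_\mu$, and enforces the spherical constraint $\sum_i s_i^2 = N$ by re-projection after each discretized Langevin step. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 1000, 3                # neurons, stored patterns
u, T, dt = 0.5, 0.1, 0.05     # quartic coupling, temperature, time step (illustrative)

xi = rng.choice([-1.0, 1.0], size=(P, N))   # random binary patterns
s = xi[0] + 0.3 * rng.standard_normal(N)    # start near pattern 0
s *= np.sqrt(N) / np.linalg.norm(s)         # spherical constraint |s|^2 = N

for _ in range(2000):
    m = xi @ s / N                           # pattern overlaps m_mu
    # -dH/ds for H = -(N/2) sum m^2 - (u N/4) sum m^4 (one plausible quartic choice)
    force = xi.T @ (m + u * m**3)
    # discretized Langevin step at temperature T, then re-project onto the sphere
    s += dt * force + np.sqrt(2 * T * dt) * rng.standard_normal(N)
    s *= np.sqrt(N) / np.linalg.norm(s)

print("final overlap with stored pattern:", xi[0] @ s / N)
```

In the retrieval regime the overlap with the condensed pattern stays close to one, while the hard re-projection plays the role of the Lagrange multiplier that enforces the spherical constraint in the continuous-time equations.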


Read also

We study the recognition capabilities of the Hopfield model with auxiliary hidden layers, which emerge naturally upon a Hubbard-Stratonovich transformation. We show that the recognition capabilities of such a model at zero temperature outperform those of the original Hopfield model, due to a substantial increase of the storage capacity and the lack of a naturally defined basin of attraction. The modified model does not fall abruptly into a regime of complete confusion when the memory load exceeds a sharp threshold.
Using the generating functional analysis, an exact recursion relation is derived for the time evolution of the effective local field of the fully connected Little-Hopfield model. It is shown that, by leaving out the feedback correlations arising from earlier times in this effective dynamics, one precisely finds the recursion relations usually employed in the signal-to-noise approach. The consequences of this approximation as well as the physics behind it are discussed. In particular, it is pointed out why it is hard to notice the effects, especially for model parameters corresponding to retrieval. Numerical simulations confirm these findings. The signal-to-noise analysis is then extended to include all correlations, making it a full theory for dynamics at the level of the generating functional analysis. The results are applied to the frequently employed extremely diluted (a)symmetric architectures and to sequence processing networks.
We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the celebrated Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.
We perform numerical simulations of a long-range spherical spin glass with two and three body interaction terms. We study the gradient descent dynamics and the inherent structures found after a quench from initial conditions, well thermalized at temperature $T_{in}$. In large systems, the dynamics strictly agrees with the integration of the mean-field dynamical equations. In particular, we confirm the existence of an onset initial temperature, within the liquid phase, below which the energy of the inherent structures undoubtedly depends on $T_{in}$. This behavior is in contrast with that of pure models, where there is a threshold energy that attracts all the initial configurations in the liquid. Our results strengthen the analogy between mean-field spin glass models and supercooled liquids.
Macroscopic spin ensembles possess brain-like features such as non-linearity, plasticity, stochasticity, self-oscillations, and memory effects, and therefore offer opportunities for neuromorphic computing by spintronics devices. Here we propose a physical realization of artificial neural networks based on magnetic textures, which can update their weights intrinsically via built-in physical feedback, utilizing the plasticity and large number of degrees of freedom of the magnetic domain patterns and without resource-demanding external computations. We demonstrate the idea by simulating the operation of a 4-node Hopfield neural network for pattern recognition.
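Several of the abstracts above build on the standard binary Hopfield model of associative memory. As a reference point for what "pattern recognition" means in that setting, here is a textbook-style sketch (not any of the above papers' specific implementations): Hebbian couplings store a few random patterns, a corrupted version of one pattern is presented, and zero-temperature asynchronous dynamics relaxes the state back to the stored memory. The sizes and corruption level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 4                                # neurons and stored patterns (illustrative)
xi = rng.choice([-1, 1], size=(P, N))        # random binary patterns

# Hebbian coupling matrix, no self-interaction
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

# present pattern 0 with 15% of its bits flipped
s = xi[0].copy()
flip = rng.choice(N, size=15, replace=False)
s[flip] *= -1

# zero-temperature asynchronous dynamics: align each spin with its local field
for _ in range(10):
    for i in rng.permutation(N):
        h = J[i] @ s
        s[i] = 1 if h >= 0 else -1

print("recall overlap:", xi[0] @ s / N)
```

With a memory load $\alpha = P/N$ well below the classical storage limit ($\alpha_c \approx 0.138$), the corrupted input lies inside the basin of attraction of the stored pattern and the overlap relaxes to a value close to one.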