
A Hopfield neural network in magnetic films with natural learning

Added by Weichao Yu
Publication date: 2021
Field: Physics
Language: English





Macroscopic spin ensembles possess brain-like features such as non-linearity, plasticity, stochasticity, self-oscillations, and memory effects, and therefore offer opportunities for neuromorphic computing with spintronic devices. Here we propose a physical realization of artificial neural networks based on magnetic textures, which can update their weights intrinsically via built-in physical feedback, exploiting the plasticity and the large number of degrees of freedom of magnetic domain patterns, without resource-demanding external computations. We demonstrate the idea by simulating the operation of a 4-node Hopfield neural network for pattern recognition.
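For readers who want a concrete baseline, the sketch below is a conventional software Hopfield network of the same 4-node size, trained with a standard Hebbian rule; the paper's point is that the weight updates happen physically in the magnetic texture, whereas here they are computed explicitly. The stored pattern, names, and parameters are illustrative, not taken from the paper.

```python
# Minimal 4-node Hopfield network with Hebbian learning (software baseline).
import numpy as np

def train_hebbian(patterns):
    """Hebbian weights W_ij = (1/N) sum_mu xi_i^mu xi_j^mu, zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, state, sweeps=10, seed=0):
    """Asynchronous sign updates: s_i <- sign(sum_j W_ij s_j)."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

patterns = np.array([[1, -1, 1, -1]])   # one stored 4-bit pattern
w = train_hebbian(patterns)
probe = np.array([1, 1, 1, -1])         # the stored pattern with one bit flipped
print(recall(w, probe))                 # recovers [ 1 -1  1 -1]
```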



Related research

We propose a new framework to understand how quantum effects may impact the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the celebrated Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.
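For orientation, "Markovian open quantum system" here refers to dynamics of the Lindblad (GKSL) type; the generic form is sketched below, with the density matrix $\rho$, Hamiltonian $H$, rates $\gamma_k$, and jump operators $L_k$ left abstract, since the paper's specific spin-model choices are not reproduced here.

```latex
% Generic GKSL (Lindblad) master equation; the jump operators L_k encode
% the thermal and quantum noise channels acting on the network's spins.
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \gamma_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2}\left\{ L_k^\dagger L_k,\, \rho \right\} \right)
```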
We introduce a spherical Hopfield-type neural network in which both the neurons and the patterns are continuous variables. We study both the thermodynamics and the dynamics of this model. In order to have a retrieval phase, a quartic term is added to the Hamiltonian. The thermodynamics of the model is exactly solvable, and the results are replica symmetric. A Langevin dynamics leads to a closed set of equations for the order parameters and for the effective correlation and response functions typical of neural networks. The stationary limit corresponds to the thermodynamic results. Numerical calculations illustrate our findings.
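As a hedged illustration of the setup, one plausible form of such a model combines the standard Hebbian couplings with a spherical constraint and a quartic term in the pattern overlaps; the exact coefficients and sign conventions used in the paper may differ.

```latex
% Spherical Hopfield-type energy (illustrative form): Hebbian quadratic part
% plus a quartic overlap term that restores a retrieval phase, with the
% continuous spins constrained to a sphere of radius sqrt(N).
H = -\frac{N}{2}\sum_{\mu=1}^{p} m_\mu^{2} - \frac{uN}{4}\sum_{\mu=1}^{p} m_\mu^{4},
\qquad m_\mu = \frac{1}{N}\sum_{i=1}^{N} \xi_i^{\mu}\,\sigma_i ,
\qquad \sum_{i=1}^{N} \sigma_i^{2} = N .
```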
A universal supervised neural network (NN) for computing the critical points of real experiments on phase transitions is constructed. The validity of the built NN is examined by applying it to calculate the criticalities of several three-dimensional (3D) models on the cubic lattice, including the classical $O(3)$ model, the 5-state ferromagnetic Potts model, and a dimerized quantum antiferromagnetic Heisenberg model. Remarkably, although the NN is trained only once, on a one-dimensional (1D) lattice with 120 sites, it successfully determines the critical points of the studied 3D systems. Moreover, real configurations of states are not used in the testing stage: the configurations employed for prediction are constructed on a 1D lattice of 120 sites from the bulk quantities or the microscopic states of the considered models. As a result, our calculations are extremely efficient, and the applications of the built NN are very broad. Considering how dramatically the investigated systems differ from each other, it is remarkable that the combination of these two strategies in the training and testing stages leads to a highly universal supervised neural network for learning the phases and criticalities of 3D models. Based on the outcomes presented in this study, it is plausible that similarly simple yet elegant machine learning techniques can be constructed for fields of many-body physics beyond critical phenomena.
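The toy sketch below illustrates only the train-once-then-sweep idea in its simplest possible form: a linear classifier is fit on synthetic 120-site binary strings standing in for ordered versus disordered states, and its mean output is then swept to locate a crossing. The data model, the classifier, and the sweep are invented stand-ins, not the paper's procedure.

```python
# Toy version of "train once on a 1D lattice of 120 sites, then locate a
# transition from the classifier output". All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
L = 120

def configs(p, n):
    """n binary 0/1 configurations of length L with site-up probability p."""
    return (rng.random((n, L)) < p).astype(float)

# Training set: clearly "ordered" (p = 0.95) vs "disordered" (p = 0.5) strings.
x = np.vstack([configs(0.95, 500), configs(0.5, 500)])
y = np.array([1.0] * 500 + [0.0] * 500)

# Logistic regression fit by plain gradient descent.
w, b = np.zeros(L), 0.0
for _ in range(2000):
    z = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted "ordered" probability
    g = z - y
    w -= 0.1 * (x.T @ g) / len(y)
    b -= 0.1 * g.mean()

# Sweep the control parameter and read off where the output crosses 0.5.
for p in np.linspace(0.5, 1.0, 6):
    s = 1.0 / (1.0 + np.exp(-(configs(p, 200) @ w + b)))
    print(f"p = {p:.2f}   mean output = {s.mean():.3f}")
```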
Many real-world mission-critical applications require continual online learning from noisy data and real-time decision making with a defined confidence level. Probabilistic models and stochastic neural networks can explicitly handle uncertainty in data and allow adaptive learning on the fly, but their implementation in a low-power substrate remains a challenge. Here, we introduce a novel hardware fabric that implements a new class of stochastic neural network (NN) called the Neural Sampling Machine (NSM), which exploits stochasticity in its synaptic connections for approximate Bayesian inference. Harnessing the inherent non-linearities and stochasticity occurring at the atomic level in emerging materials and devices allows us to capture the synaptic stochasticity occurring at the molecular level in biological synapses. We experimentally demonstrate a hybrid stochastic synapse by pairing a ferroelectric field-effect transistor (FeFET)-based analog weight cell with a two-terminal stochastic selector element. Such a stochastic synapse can be integrated within the well-established crossbar array architecture for compute-in-memory. We experimentally show that the inherent stochastic switching of the selector element between the insulating and metallic states introduces multiplicative stochastic noise within the synapses of the NSM that samples the conductance states of the FeFET, both during learning and inference. We perform network-level simulations to highlight the salient automatic weight normalization feature introduced by the stochastic synapses of the NSM, which paves the way for continual online learning without any offline batch normalization. We also showcase the Bayesian inference capability introduced by the stochastic synapse during inference mode, thus accounting for uncertainty in the data. We report 98.25% accuracy on a standard image classification task, as well as estimation of data uncertainty in rotated samples.
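As a rough software analogy to the synaptic stochasticity described above, the sketch below gates every weight with an independent Bernoulli "selector" on each forward pass, so repeated passes sample an output distribution whose spread acts as an uncertainty estimate. The Bernoulli model, the probability p_on, and all names are illustrative assumptions rather than the paper's device physics.

```python
# Multiplicative synaptic noise as a toy model of selector switching.
import numpy as np

rng = np.random.default_rng(1)

def stochastic_forward(w, x, p_on=0.7):
    """Forward pass with each synapse independently on (metallic) or off
    (insulating) with probability p_on, i.e. multiplicative Bernoulli noise."""
    mask = rng.random(w.shape) < p_on
    return (w * mask) @ x

w = rng.normal(size=(3, 5))    # analog weights (FeFET conductances, schematically)
x = rng.normal(size=5)

# Repeated stochastic passes give a predictive mean and an uncertainty proxy.
samples = np.array([stochastic_forward(w, x) for _ in range(1000)])
print("mean:", samples.mean(axis=0))
print("std :", samples.std(axis=0))
```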
Do-Hyun Kim, Jinha Park (2016)
The Hopfield model is a pioneering neural network model with associative memory retrieval. The analytical solution of the model in the mean-field limit revealed that memories can be retrieved without any error up to a finite storage capacity of $O(N)$, where $N$ is the system size. Beyond that threshold, they are completely lost. Since the introduction of the Hopfield model, the theory of neural networks has been developed further toward realistic neural networks using analog neurons, spiking neurons, etc. Nevertheless, those advances are based on fully connected networks, which are inconsistent with the recent experimental discovery that the number of connections of each neuron appears to be heterogeneous, following a heavy-tailed distribution. Motivated by this observation, we consider the Hopfield model on scale-free networks and obtain a pattern of associative memory retrieval different from that on the fully connected network: the storage capacity becomes tremendously enhanced, but with some error in the memory retrieval, which appears as the heterogeneity of the connections is increased. Moreover, the error rates obtained on several real neural networks are indeed similar to those on scale-free model networks.
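To make the setting concrete, the hedged sketch below runs Hebbian Hopfield retrieval restricted to the edges of a Barabási–Albert graph (one standard scale-free model) and reports the overlap with the probed pattern; the graph model, sizes, and the overlap diagnostic are illustrative choices, not the paper's.

```python
# Hopfield retrieval on a scale-free graph: weights exist only on graph edges.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n, p = 200, 20                                           # neurons, stored patterns

adj = nx.to_numpy_array(nx.barabasi_albert_graph(n, 3, seed=2))
xi = rng.choice([-1, 1], size=(p, n))                    # random binary patterns
w = (xi.T @ xi / n) * adj                                # Hebbian weights on edges
np.fill_diagonal(w, 0.0)

s = xi[0].copy()                                         # probe with pattern 0
for _ in range(20):                                      # asynchronous updates
    for i in rng.permutation(n):
        s[i] = 1 if w[i] @ s >= 0 else -1

overlap = (s @ xi[0]) / n                                # 1.0 = error-free retrieval
print(f"retrieval overlap m = {overlap:.3f}")
```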