
The three-state layered neural network with finite dilution

Added by Rubem Erichsen Jr.
Publication date: 2004
Field: Physics
Language: English





The dynamics and the stationary states of an exactly solvable three-state layered feed-forward neural network model with asymmetric synaptic connections, finite dilution and low pattern activity are studied, extending recent work on a recurrent network. Detailed phase diagrams are obtained for the stationary states and for the time evolution of the retrieval overlap with a single pattern. It is shown that the network develops instabilities for low thresholds and that there is a gradual improvement in network performance with increasing threshold up to an optimal stage. The robustness to synaptic noise is checked, and the effects of dilution and of a variable threshold on the information content of the network are also established.
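The central order parameter above, the retrieval overlap with a single pattern, can be sketched numerically. The following is a minimal illustration, not the paper's model: it keeps one stored pattern, a plain Hebbian coupling, and the three-state threshold rule sigma = sign(h) if |h| > theta, else 0; the values of N, a and theta are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000      # neurons per layer
a = 0.1       # pattern activity (fraction of nonzero states)
theta = 0.5   # firing threshold (hypothetical value)

# One low-activity three-state pattern: xi_i in {-1, 0, +1},
# nonzero with probability a.
xi = rng.choice([-1, 0, 1], size=N, p=[a / 2, 1 - a, a / 2])

def update(sigma, J, theta):
    """Threshold dynamics: fire sign(h) when |h| exceeds theta, else stay 0."""
    h = J @ sigma
    return np.where(np.abs(h) > theta, np.sign(h), 0.0)

def overlap(sigma, xi, a):
    """Retrieval overlap m = (1 / (a N)) * sum_i xi_i sigma_i."""
    return float(xi @ sigma) / (a * len(xi))

# Hebbian coupling to the single stored pattern (illustrative only; the
# paper's model stores many patterns with diluted, asymmetric couplings
# between successive layers).
J = np.outer(xi, xi) / (a * N)

sigma = np.where(rng.random(N) < 0.8, xi, 0)  # noisy initial state
for _ in range(5):
    sigma = update(sigma, J, theta)

print(f"overlap after iteration: {overlap(sigma, xi, a):.2f}")
```

With a single stored pattern the local field is simply h_i = xi_i * m, so for a threshold below the initial overlap the dynamics drives m toward its maximal value; lowering theta toward 0 lets the zero-state neurons fire spuriously, which is the kind of low-threshold instability the abstract refers to.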



Related research

The three-state Ising neural network with synchronous updating and variable dilution is discussed starting from the appropriate Hamiltonians. The thermodynamic and retrieval properties are examined using replica mean-field theory. Capacity-temperature phase diagrams are derived for several values of the pattern activity and different gradations of dilution, and the information content is calculated. The results are compared with those for sequential updating. The effect of self-coupling is established. The dynamics is also studied using the generating-function technique for both synchronous and sequential updating. Typical flow diagrams for the overlap order parameter are presented. The differences with the signal-to-noise approach are outlined.
The thermodynamic and retrieval properties of the Blume-Emery-Griffiths neural network with synchronous updating and variable dilution are studied using replica mean-field theory. Several forms of dilution are allowed by pruning the different types of couplings present in the Hamiltonian. The appearance and properties of two-cycles are discussed. Capacity-temperature phase diagrams are derived for several values of the pattern activity. The results are compared with those for sequential updating. The effect of self-coupling is studied. Furthermore, the optimal combination of dilution parameters giving the largest critical capacity is obtained.
D. Bolle, R. Heylen (2006)
The inclusion of a macroscopic adaptive threshold is studied for the retrieval dynamics of layered feedforward neural network models with synaptic noise. It is shown that if the threshold is chosen appropriately as a function of the cross-talk noise and of the activity of the stored patterns, adapting itself automatically in the course of the recall process, autonomous functioning of the network is guaranteed. This self-control mechanism considerably improves the quality of retrieval, in particular the storage capacity, the basins of attraction and the mutual information content.
D. Bolle, T. Verbeiren (2001)
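The abstract above does not spell out the threshold's functional form. In the self-control literature for sparse patterns, a common choice makes the threshold proportional to the cross-talk standard deviation with a sparseness factor sqrt(-2 ln a); the sketch below assumes that form and is illustrative only.

```python
import math

def self_control_threshold(a, crosstalk_var):
    """Adaptive threshold that grows with the cross-talk noise and with
    pattern sparseness.  The prefactor sqrt(-2 ln a) for low activity a
    is an assumption borrowed from the self-control literature, not a
    formula stated in the abstract."""
    return math.sqrt(-2.0 * math.log(a)) * math.sqrt(crosstalk_var)

# Example: activity a = 0.1, cross-talk variance 0.04 at the current step.
theta = self_control_threshold(0.1, 0.04)
```

Because the threshold is recomputed from the current noise level at every time step, no external tuning is needed during recall, which is what "autonomous functioning" refers to.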
Starting from the mutual information, we present a method to find a Hamiltonian for a fully connected neural network model with an arbitrary, finite number of neuron states, Q. For small initial correlations between the neurons and the patterns it leads to optimal retrieval performance. For binary neurons, Q=2, and biased patterns we recover the Hopfield model. For three-state neurons, Q=3, we recover the recently introduced Blume-Emery-Griffiths network Hamiltonian. We derive its phase diagram and compare it with those of related three-state models, finding that its retrieval region is the largest.
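For orientation, the Blume-Emery-Griffiths network Hamiltonian referred to here has, for neurons sigma_i in {-1, 0, +1}, the general two-coupling form below; the precise Hebbian expressions for the couplings J_ij and K_ij in terms of the stored patterns are given in the paper, not reproduced here.

```latex
H = -\frac{1}{2}\sum_{i \neq j} J_{ij}\,\sigma_i \sigma_j
    \;-\; \frac{1}{2}\sum_{i \neq j} K_{ij}\,\sigma_i^2 \sigma_j^2
```

The first sum is the familiar spin-spin (Hopfield-like) term; the second, quadrupolar term is what distinguishes the BEG network and enlarges its retrieval region.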
It is known that a Restricted Boltzmann Machine (RBM) trained on binary Monte Carlo Ising spin configurations generates a series of iteratively reconstructed spin configurations which spontaneously flow to, and stabilize at, the critical point of the physical system. Here we construct a variety of Neural Network (NN) flows using the RBM and (variational) autoencoders to study the q-state Potts and clock models on the square lattice for q = 2, 3, 4. The NNs are trained on Monte Carlo spin configurations at various temperatures. We find that the trained NN flow does develop a stable point that coincides with the critical point of the q-state spin models. The behavior of the NN flow is nontrivial and generative, since the training is unsupervised and uses no prior knowledge about the critical point or the Hamiltonian of the underlying spin model. Moreover, we find that the convergence of the flow is independent of the types of NNs and spin models, hinting at universal behavior. Our results strengthen the potential applicability of the notion of the NN flow in studying various states of matter and offer additional evidence for the connection with the Renormalization Group flow.
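The NN flow described above iterates an RBM's reconstruction map on spin configurations. A minimal sketch of that iteration follows; the weights here are random placeholders, whereas in the paper the RBM is first trained, unsupervised, on Monte Carlo configurations, and only then does the flow's fixed point land at the critical temperature.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM.  Random weights stand in for the trained ones
    used in the paper (illustrative only)."""
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases

    def reconstruct(self, v):
        """One NN-flow step: v -> h -> v', using mean-field probabilities
        rather than stochastic sampling, for a deterministic flow."""
        h = sigmoid(v @ self.W + self.c)
        return sigmoid(h @ self.W.T + self.b)

# Iterate the flow from a random spin configuration (0/1 encoding of
# Ising spins); with a trained RBM, observables such as the average
# magnetization drift toward their values at the critical point.
rbm = RBM(n_visible=64, n_hidden=32)
v = rng.integers(0, 2, size=64).astype(float)
for _ in range(20):
    v = rbm.reconstruct(v)
```

Repeating the same iteration with an autoencoder's encode-decode map in place of `reconstruct` gives the other family of NN flows the abstract compares, which is how the type-independence of the fixed point is probed.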
