
Barriers and Dynamical Paths in Alternating Gibbs Sampling of Restricted Boltzmann Machines

Posted by: Clement Roussel
Publication date: 2021
Research field: Physics
Paper language: English
Author: Clement Roussel





Restricted Boltzmann Machines (RBM) are bi-layer neural networks used for the unsupervised learning of model distributions from data. The bipartite architecture of RBM naturally defines an elegant sampling procedure, called Alternating Gibbs Sampling (AGS), where the configurations of the latent-variable layer are sampled conditionally on the data-variable layer, and vice versa. We study here the performance of AGS on several analytically tractable models borrowed from statistical mechanics. We show that standard AGS is not more efficient than classical Metropolis-Hastings (MH) sampling of the effective energy landscape defined on the data layer. However, RBM can identify meaningful representations of training data in their latent space. Furthermore, using these representations and combining Gibbs sampling with the MH algorithm in the latent space can enhance the sampling performance of the RBM when the hidden units encode weakly dependent features of the data. We illustrate our findings on three datasets: Bars and Stripes and MNIST, well known in machine learning, and the so-called Lattice Proteins, introduced in theoretical biology to study the sequence-to-structure mapping in proteins.
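Because the RBM graph is bipartite, all hidden units are conditionally independent given the visible layer, and vice versa, so each AGS half-step reduces to a single vectorized update. Below is a minimal NumPy sketch of one AGS chain for a binary-binary RBM; the parameter names and the toy initialization are illustrative placeholders, not the models studied in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ags_chain(v, W, b_v, b_h, n_steps, rng):
    """Alternating Gibbs Sampling for a binary-binary RBM.

    v   : (n_vis,) initial visible configuration in {0, 1}
    W   : (n_vis, n_hid) weight matrix
    b_v : (n_vis,) visible biases; b_h : (n_hid,) hidden biases
    """
    for _ in range(n_steps):
        # Half-step 1: sample hiddens given visibles,
        # p(h_j = 1 | v) = sigmoid(b_h[j] + sum_i v[i] W[i, j])
        h = (rng.random(b_h.shape) < sigmoid(b_h + v @ W)).astype(float)
        # Half-step 2: sample visibles given hiddens,
        # p(v_i = 1 | h) = sigmoid(b_v[i] + sum_j W[i, j] h[j])
        v = (rng.random(b_v.shape) < sigmoid(b_v + W @ h)).astype(float)
    return v

# Toy usage with random (illustrative) parameters
rng = np.random.default_rng(0)
n_vis, n_hid = 20, 10
W = 0.1 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)
v0 = (rng.random(n_vis) < 0.5).astype(float)
sample = ags_chain(v0, W, b_v, b_h, n_steps=100, rng=rng)
```

The hybrid sampler discussed in the abstract would replace the hidden half-step with a Metropolis-Hastings move in the latent space; the sketch above shows only standard AGS.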




Read also

The search for novel entangled phases of matter has led to the recent discovery of a new class of "entanglement transitions," exemplified by random tensor networks and monitored quantum circuits. Most known examples can be understood as some classical ordering transitions in an underlying statistical mechanics model, where entanglement maps onto the free energy cost of inserting a domain wall. In this paper, we study the possibility of entanglement transitions driven by physics beyond such statistical mechanics mappings. Motivated by recent applications of neural network-inspired variational Ansätze, we investigate under what conditions on the variational parameters these Ansätze can capture an entanglement transition. We study the entanglement scaling of short-range restricted Boltzmann machine (RBM) quantum states with random phases. For uncorrelated random phases, we analytically demonstrate the absence of an entanglement transition and reveal subtle finite-size effects in finite-size numerical simulations. Introducing phases with correlations decaying as $1/r^\alpha$ in real space, we observe three regions with a different scaling of entanglement entropy depending on the exponent $\alpha$. We study the nature of the transition between these regions, finding numerical evidence for critical behavior. Our work establishes the presence of long-range correlated phases in RBM-based wave functions as a required ingredient for entanglement transitions.
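As a concrete illustration of the object being analyzed, the sketch below evaluates the (unnormalized) amplitude of an RBM quantum state with complex parameters, whose imaginary parts play the role of the random phases. The dense weight matrix and all numerical choices here are hypothetical; a short-range RBM would additionally restrict each hidden unit's connections to a local neighborhood of spins.

```python
import numpy as np

def rbm_amplitude(sigma, a, b, W):
    """Unnormalized RBM wave-function amplitude for spins sigma in {-1, +1}:
    psi(sigma) = exp(sum_i a_i sigma_i) * prod_j 2 cosh(b_j + sum_i W_ij sigma_i).
    Complex parameters give psi a configuration-dependent phase."""
    theta = b + sigma @ W
    # Accumulate in log-space for numerical stability
    log_psi = a @ sigma + np.sum(np.log(2.0 * np.cosh(theta)))
    return np.exp(log_psi)

rng = np.random.default_rng(1)
n_spins, n_hidden = 12, 12
# Uncorrelated random phases: independent uniform imaginary parts (hypothetical choice)
a = 1j * rng.uniform(0.0, 2.0 * np.pi, n_spins)
b = 1j * rng.uniform(0.0, 2.0 * np.pi, n_hidden)
W = 0.3 * rng.standard_normal((n_spins, n_hidden))
sigma = rng.choice([-1.0, 1.0], n_spins)
print(rbm_amplitude(sigma, a, b, W))
```

Correlated phases in the sense of the abstract would replace the independent draws of the imaginary parts with a random field whose covariance decays as a power law in the distance between spins.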
Restricted Boltzmann machines are undirected neural networks which have been shown to be effective in many applications, including serving as initializations for training deep multi-layer neural networks. One of the main reasons for their success is the existence of efficient and practical stochastic algorithms, such as contrastive divergence, for unsupervised training. We propose an alternative deterministic iterative procedure based on an improved mean field method from statistical physics known as the Thouless-Anderson-Palmer approach. We demonstrate that our algorithm provides performance equal to, and sometimes superior to, persistent contrastive divergence, while also providing a clear and easy-to-evaluate objective function. We believe that this strategy can be easily generalized to other models as well as to more accurate higher-order approximations, paving the way for systematic improvements in training Boltzmann machines with hidden units.
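For context, the Thouless-Anderson-Palmer approach replaces stochastic Gibbs samples with deterministic magnetizations obtained from a self-consistent fixed point. Below is a minimal sketch of the second-order TAP iteration for a binary ({0,1}) RBM; the damping factor, iteration count, and initialization are hypothetical choices, not the authors' exact settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tap_magnetizations(W, a, b, n_iter=200, damping=0.5, rng=None):
    """Second-order TAP fixed-point iteration for a binary ({0,1}) RBM.

    W : (n_vis, n_hid) weights; a, b : visible and hidden biases.
    Returns approximate mean activations (magnetizations) of both layers
    under the model distribution."""
    rng = rng or np.random.default_rng()
    m_v = rng.uniform(0.4, 0.6, W.shape[0])  # hypothetical initialization
    m_h = rng.uniform(0.4, 0.6, W.shape[1])
    for _ in range(n_iter):
        var_v = m_v - m_v**2  # Bernoulli variances of the visible units
        # Naive mean-field term plus Onsager reaction correction, hidden layer
        m_h_new = sigmoid(b + m_v @ W - (m_h - 0.5) * (var_v @ W**2))
        m_h = damping * m_h + (1 - damping) * m_h_new
        var_h = m_h - m_h**2
        m_v_new = sigmoid(a + W @ m_h - (m_v - 0.5) * (W**2 @ var_h))
        m_v = damping * m_v + (1 - damping) * m_v_new
    return m_v, m_h
```

In a TAP-based training loop, these magnetizations would stand in for the stochastic model samples that contrastive divergence draws when estimating the negative phase of the gradient.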
Guido Montufar, 2018
The restricted Boltzmann machine is a network of stochastic units with undirected interactions between pairs of visible and hidden units. This model was popularized as a building block of deep learning architectures and has continued to play an important role in applied and theoretical machine learning. Restricted Boltzmann machines carry a rich structure, with connections to geometry, applied algebra, probability, statistics, machine learning, and other areas. The analysis of these models is attractive in its own right and also as a platform to combine and generalize mathematical tools for graphical models with hidden variables. This article gives an introduction to the mathematical analysis of restricted Boltzmann machines, reviews recent results on the geometry of the sets of probability distributions representable by these models, and suggests a few directions for further investigation.
Jérôme Tubiana, 2016
Automatically extracting the complex set of features composing real high-dimensional data is crucial for achieving high performance in machine-learning tasks. Restricted Boltzmann Machines (RBM) are empirically known to be efficient for this purpose, and to be able to generate distributed and graded representations of the data. We characterize the structural conditions (sparsity of the weights, low effective temperature, nonlinearities in the activation functions of hidden units, and adaptation of fields maintaining the activity in the visible layer) allowing RBM to operate in such a compositional phase. Evidence is provided by the replica analysis of an adequate statistical ensemble of random RBMs and by RBMs trained on the handwritten-digits dataset MNIST.
Calculating the spectral function of two-dimensional systems is arguably one of the most pressing challenges in modern computational condensed matter physics. While efficient techniques are available in lower dimensions, two-dimensional systems present insurmountable hurdles, ranging from the sign problem in quantum Monte Carlo (MC) to the entanglement area law in tensor-network-based methods. We hereby present a variational approach based on a Chebyshev expansion of the spectral function and a neural network representation for the wave functions. The Chebyshev moments are obtained by recursively applying the Hamiltonian and projecting on the space of variational states using a modified natural gradient descent method. We compare this approach with a modified approximation of the spectral function which uses a Krylov subspace constructed from the Chebyshev wave functions. We present results for the one-dimensional and two-dimensional Heisenberg model on the square lattice, and compare to those obtained by other methods in the literature.
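Independently of the variational machinery, the final step of such a scheme, turning Chebyshev moments into a smooth spectral function, follows the standard kernel polynomial method. The sketch below assumes the moments $\mu_n$ of the rescaled spectral function supported on $(-1, 1)$ are already available (here faked with a single delta peak) and applies Jackson damping to suppress Gibbs oscillations; all names and numerical choices are hypothetical.

```python
import numpy as np

def jackson_kernel(N):
    """Jackson damping factors g_n for n = 0..N-1 (suppress Gibbs oscillations)."""
    n = np.arange(N)
    return ((N - n + 1) * np.cos(np.pi * n / (N + 1))
            + np.sin(np.pi * n / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)

def spectral_from_moments(mu, omega):
    """Kernel-polynomial reconstruction of a spectral function A on (-1, 1)
    from its Chebyshev moments mu_n = integral A(w) T_n(w) dw."""
    N = len(mu)
    g = jackson_kernel(N)
    # Evaluate T_n(omega) via the trigonometric form T_n(cos t) = cos(n t)
    t = np.arccos(omega)
    series = g[0] * mu[0] + 2.0 * np.sum(
        (g[1:] * mu[1:])[:, None] * np.cos(np.outer(np.arange(1, N), t)), axis=0)
    return series / (np.pi * np.sqrt(1.0 - omega**2))

# Toy check: the moments of a delta peak at w0 are mu_n = T_n(w0) = cos(n arccos w0)
w0 = 0.3
mu = np.cos(np.arange(128) * np.arccos(w0))
omega = np.linspace(-0.99, 0.99, 512)
A = spectral_from_moments(mu, omega)  # shows a kernel-broadened peak near w0
```

In an actual calculation the moments would come from the recursive Hamiltonian applications the abstract describes, rather than from a closed-form toy spectrum.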