
Boltzmann machines as two-dimensional tensor networks

Posted by Sujie Li
Publication date: 2021
Research language: English





Restricted Boltzmann machines (RBM) and deep Boltzmann machines (DBM) are important models in machine learning, and have recently found numerous applications in quantum many-body physics. We show that there are fundamental connections between them and tensor networks. In particular, we demonstrate that any RBM and DBM can be exactly represented as a two-dimensional tensor network. This representation yields an understanding of the expressive power of RBM and DBM through the entanglement structures of the tensor networks, and also provides an efficient tensor network contraction algorithm for computing the partition function of RBM and DBM. Using numerical experiments, we demonstrate that the proposed algorithm is much more accurate than state-of-the-art machine learning methods in estimating the partition function of restricted Boltzmann machines and deep Boltzmann machines, and has potential applications in training deep Boltzmann machines for general machine learning tasks.
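As a minimal illustration of the connection, the sketch below writes a tiny RBM's partition function as a tensor network and contracts it with a single einsum. This is a brute-force contraction used only to check the construction, not the paper's two-dimensional contraction algorithm, and the energy convention E(v, h) = -a.v - b.h - v^T W h with binary units is an assumption here.

```python
import numpy as np
from itertools import product
import string

rng = np.random.default_rng(0)
n_v, n_h = 3, 2                          # tiny RBM so brute force is feasible
a, b = rng.normal(size=n_v), rng.normal(size=n_h)
W = rng.normal(size=(n_v, n_h))

# Brute-force reference: sum exp(-E) over all 2^(n_v + n_h) joint states.
Z_brute = 0.0
for v in product([0, 1], repeat=n_v):
    for h in product([0, 1], repeat=n_h):
        v_, h_ = np.array(v), np.array(h)
        Z_brute += np.exp(a @ v_ + b @ h_ + v_ @ W @ h_)

# Tensor-network view: one index per unit; each bias becomes a length-2
# vector [1, e^bias], each weight W_ij a 2x2 matrix exp(W_ij * v * h).
# Contracting everything sums over all spin configurations at once.
v_lab = string.ascii_letters[:n_v]
h_lab = string.ascii_letters[n_v:n_v + n_h]
subs, ops = [], []
for i in range(n_v):
    subs.append(v_lab[i]); ops.append(np.exp(a[i] * np.arange(2)))
for j in range(n_h):
    subs.append(h_lab[j]); ops.append(np.exp(b[j] * np.arange(2)))
for i in range(n_v):
    for j in range(n_h):
        subs.append(v_lab[i] + h_lab[j])
        ops.append(np.exp(W[i, j] * np.outer(np.arange(2), np.arange(2))))
Z_tn = np.einsum(','.join(subs) + '->', *ops)

print(Z_brute, Z_tn)                     # agree up to floating-point error
```

For realistic sizes this naive single-einsum contraction is exponential; the point of the paper's 2D tensor network representation is precisely to enable efficient approximate contraction schemes instead.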




Read also

A Boltzmann machine is a stochastic neural network that has been extensively used in the layers of deep architectures for modern machine learning applications. In this paper, we develop a Boltzmann machine that is capable of modelling thermodynamic observables for physical systems in thermal equilibrium. Through unsupervised learning, we train the Boltzmann machine on data sets constructed with spin configurations importance-sampled from the partition function of an Ising Hamiltonian at different temperatures using Monte Carlo (MC) methods. The trained Boltzmann machine is then used to generate spin states, for which we compare thermodynamic observables to those computed by direct MC sampling. We demonstrate that the Boltzmann machine can faithfully reproduce the observables of the physical system. Further, we observe that the number of neurons required to obtain accurate results increases as the system is brought close to criticality.
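The data-generation step described above can be sketched as follows: Metropolis importance sampling of 2D Ising configurations at a given temperature, which would then serve as the Boltzmann machine's unsupervised training set. The lattice size, temperature, and sweep counts below are illustrative choices, not the paper's settings.

```python
import numpy as np

def metropolis_samples(L=8, T=2.5, n_samples=100, thin=50, burn=200, seed=0):
    """Draw Ising spin configurations at temperature T (J = 1, k_B = 1)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))

    def sweep():                             # one sweep = L*L attempted flips
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            # Energy cost of flipping spin (i, j), periodic boundaries
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    for _ in range(burn):                    # equilibrate before recording
        sweep()
    samples = []
    for _ in range(n_samples):
        for _ in range(thin):                # thin to reduce autocorrelation
            sweep()
        samples.append(spins.copy())
    return np.array(samples)

data = metropolis_samples()
print(data.shape, data.mean())               # (100, 8, 8) and a magnetization estimate
```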
We introduce tensor network contraction algorithms for counting satisfying assignments of constraint satisfaction problems (#CSPs). We represent each arbitrary #CSP formula as a tensor network, whose full contraction yields the number of satisfying assignments of that formula, and use graph theoretical methods to determine favorable orders of contraction. We employ our heuristics for the solution of #P-hard counting boolean satisfiability (#SAT) problems, namely monotone #1-in-3SAT and #Cubic-Vertex-Cover, and find that they outperform state-of-the-art solvers by a significant margin.
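A toy version of this construction: each clause of a small CNF formula becomes a 0/1 tensor over its variables, shared variables are identified by a common einsum index (playing the role of a copy tensor), and the full contraction yields the model count. The example formula and the naive contraction below are illustrative; the paper's contribution lies in choosing favorable contraction orders graph-theoretically.

```python
import numpy as np
from itertools import product
import string

# Example formula: (x1 or x2) and (not x1 or x3) and (not x2 or not x3).
# Each literal is (variable index, polarity); polarity True = positive.
clauses = [[(0, True), (1, True)],
           [(0, False), (2, True)],
           [(1, False), (2, False)]]
n_vars = 3

def clause_tensor(clause):
    """0/1 tensor over the clause's variables: 1 iff the clause is satisfied."""
    t = np.zeros((2,) * len(clause))
    for bits in product([0, 1], repeat=len(clause)):
        if any((bit == 1) == pos for bit, (_, pos) in zip(bits, clause)):
            t[bits] = 1.0
    return t

# One einsum label per variable; a label shared by several clause tensors
# acts as a copy tensor, and summing over all labels counts assignments.
labels = string.ascii_letters
subs = [''.join(labels[v] for v, _ in cl) for cl in clauses]
count_tn = np.einsum(','.join(subs) + '->', *[clause_tensor(c) for c in clauses])

# Brute-force check over all 2^n assignments
count_bf = sum(
    all(any((assign[v] == 1) == pos for v, pos in cl) for cl in clauses)
    for assign in product([0, 1], repeat=n_vars)
)
print(int(count_tn), count_bf)               # both print 2 for this formula
```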
Jing Liu, Sujie Li, Jiang Zhang (2021)
Modeling the joint distribution of high-dimensional data is a central task in unsupervised machine learning. In recent years, much interest has been attracted to developing learning models based on tensor networks, which have the advantages of a theoretical understanding of expressive power through entanglement properties and of acting as a bridge connecting classical and quantum computation. Despite the great potential, however, existing tensor-network-based unsupervised models work only as a proof of principle, as their performance is much worse than that of standard models such as restricted Boltzmann machines and neural networks. In this work, we present the Autoregressive Matrix Product States (AMPS), a tensor-network-based model combining matrix product states from quantum many-body physics and autoregressive models from machine learning. The model enjoys exact calculation of normalized probabilities and unbiased sampling, as well as a clear theoretical understanding of its expressive power. We demonstrate the performance of our model in two applications: generative modeling on synthetic and real-world data, and reinforcement learning in statistical physics. Using extensive numerical experiments, we show that the proposed model significantly outperforms existing tensor-network-based models and restricted Boltzmann machines, and is competitive with state-of-the-art neural network models.
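To make the chain-rule property concrete, here is a toy MPS-defined distribution over binary strings. This is a simplified construction of my own with non-negative cores, not the paper's AMPS parameterization; it shows exact normalization and unbiased sequential sampling via p(x) = prod_i p(x_i | x_<i) computed from partial contractions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, chi = 6, 4                           # sites and bond dimension (toy values)
# Non-negative MPS cores => unnormalized weights w(x) = prod_i A_i[x_i]
A = [rng.random((2, 1 if i == 0 else chi, 1 if i == n - 1 else chi))
     for i in range(n)]

# Summing a core's physical index marginalizes that site
M = [a[0] + a[1] for a in A]

# S[i] = M_i M_{i+1} ... M_{n-1}: marginalizes every site >= i
S = [None] * (n + 1)
S[n] = np.eye(M[-1].shape[1])
for i in reversed(range(n)):
    S[i] = M[i] @ S[i + 1]
Z = S[0][0, 0]                          # exact normalization constant

def sample():
    """Draw x ~ w(x)/Z exactly, one conditional p(x_i | x_<i) at a time."""
    left = np.ones((1,))                # boundary vector, absorbs the prefix
    x = []
    for i in range(n):
        # Unnormalized marginal weight of each value of x_i given the prefix
        w = [float(left @ A[i][s] @ S[i + 1][:, 0]) for s in (0, 1)]
        s = int(rng.random() < w[1] / (w[0] + w[1]))
        left = left @ A[i][s]
        x.append(s)
    return x

print(Z, sample())                      # normalization and one unbiased sample
```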
Marco Nava (2013)
In this thesis we have used Quantum Monte Carlo techniques to study two systems that can be regarded as archetypes for neutral strongly interacting systems: 4He and its fermionic counterpart 3He. More specifically, we have used the Path Integral Ground State and the Path Integral Monte Carlo methods to study a system of two-dimensional 3He (2d-3He) and a system of 4He adsorbed on Graphene-Fluoride (GF) and Graphane (GH) at both zero and finite temperature. The purpose of the study of 4He on GF (GH) was the search for new physical phenomena, whereas in the case of 2d-3He it was the application of novel methodologies for the ab-initio study of static and dynamic properties of Fermi systems. In the case of 2d-3He we have computed the spin susceptibility as a function of density, which turned out to be in very good agreement with experimental data; we have also obtained the first ab-initio evaluation of the zero-sound mode and the dynamic structure factor of 2d-3He, which is in remarkably good agreement with experiments. In the case of 4He adsorbed on GF (GH), we determined the zero-temperature equilibrium density of the first monolayer of 4He, showing also that the commensurate sqrt(3) x sqrt(3) R30 phase is unstable on both substrates; at equilibrium density we found that 4He on GF (GH) is a modulated superfluid with an anisotropic phono-rotonic spectrum; at high coverages we found an incommensurate triangular solid and, on both GF and GH, a commensurate phase at filling factor x = 2/7 that is locally stable or at least metastable. Remarkably, in this commensurate solid phase and for both GF and GH, our computations show preliminary evidence of the presence of a superfluid fraction.
Restricted Boltzmann Machines (RBMs) are a class of generative neural networks that are typically trained to maximize a log-likelihood objective. We argue that likelihood-based training strategies may fail because the objective does not sufficiently penalize models that place high probability in regions where the training data distribution has low probability. To overcome this problem, we introduce Boltzmann Encoded Adversarial Machines (BEAMs). A BEAM is an RBM trained against an adversary that uses the hidden-layer activations of the RBM to discriminate between the training data and the probability distribution generated by the model. We present experiments demonstrating that BEAMs outperform RBMs and GANs on multiple benchmarks.
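Schematically, the adversarial signal could look like the sketch below. This is a hedged illustration of the idea, not the authors' exact training procedure: the RBM's hidden-layer activations on data and on model samples are fed to a simple logistic discriminator, whose separability score would supply the extra penalty on the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 20, 10
W = rng.normal(scale=0.1, size=(n_v, n_h))   # illustrative RBM weights
b_h = np.zeros(n_h)

def hidden_activations(v):
    """Mean-field hidden units p(h = 1 | v) of the RBM."""
    return 1.0 / (1.0 + np.exp(-(v @ W + b_h)))

# Stand-ins for a real training batch and RBM fantasy (model) samples
v_data = rng.integers(0, 2, size=(64, n_v))
v_model = rng.integers(0, 2, size=(64, n_v))
h_data, h_model = hidden_activations(v_data), hidden_activations(v_model)

# One gradient step of a logistic discriminator D(h) = sigmoid(h @ u)
# trained to output 1 on data activations and 0 on model activations.
u = np.zeros(n_h)
for h, label in ((h_data, 1.0), (h_model, 0.0)):
    p = 1.0 / (1.0 + np.exp(-(h @ u)))
    u += 0.1 * h.T @ (label - p) / len(h)    # ascend the log-likelihood

# Adversarial score: how data-like the model's activations look to D;
# an adversarial term would push the RBM to drive this toward 1.
adv_score = np.mean(1.0 / (1.0 + np.exp(-(h_model @ u))))
print(adv_score)
```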
