
Restricted Boltzmann Machines with Gaussian Visible Units Guided by Pairwise Constraints

Posted by: Jielei Chu
Publication date: 2017
Research field: Informatics engineering
Paper language: English





Restricted Boltzmann machines (RBMs) and their variants are usually trained by contrastive divergence (CD) learning, but this training procedure is an unsupervised learning approach that receives no guidance from background knowledge. To enhance the expressive ability of traditional RBMs, in this paper we propose the pairwise-constrained restricted Boltzmann machine with Gaussian visible units (pcGRBM), a model in which the learning procedure is guided by pairwise constraints and the encoding process is conducted under that guidance. The pairwise constraints are encoded in the hidden-layer features of the pcGRBM, so that some pairs of hidden features are drawn together while others are pushed apart. To deal with real-valued data, the binary visible units are replaced by linear units with Gaussian noise. During CD learning, the pairwise constraints participate in the iterated transitions between the visible and hidden units. The proposed model is inferred by an approximate gradient descent method, and the corresponding learning algorithm is designed in this paper. To compare the effectiveness of the pcGRBM with that of traditional RBMs with Gaussian visible units, the hidden-layer features of each model are used as input data for the K-means, spectral clustering (SP), and affinity propagation (AP) algorithms. A thorough experimental evaluation is performed on sixteen image datasets from Microsoft Research Asia Multimedia (MSRA-MM). The experimental results show that the clustering performance of the K-means, SP, and AP algorithms based on the pcGRBM model is significantly better than that based on traditional RBMs. In addition, for the clustering task the pcGRBM model performs better than several semi-supervised clustering algorithms.
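As an illustration of the idea described above, here is a minimal sketch (not the authors' pcGRBM implementation) of CD-1 learning for an RBM with Gaussian visible units, plus a hypothetical pairwise-constraint penalty that pulls must-link hidden features together and pushes cannot-link features apart. The penalty weight `lam`, the constraint encoding, and all hyperparameters are assumptions; the hidden features are then fed to K-means, mirroring the paper's evaluation protocol.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GaussianRBM:
    def __init__(self, n_vis, n_hid, lr=1e-3, lam=0.1):
        self.W = rng.normal(0, 0.01, (n_vis, n_hid))
        self.a = np.zeros(n_vis)   # visible biases
        self.b = np.zeros(n_hid)   # hidden biases
        self.lr, self.lam = lr, lam

    def hidden_probs(self, v):
        # P(h=1|v) for binary hidden units, unit-variance Gaussian visibles
        return sigmoid(v @ self.W + self.b)

    def sample_visible(self, h):
        # Linear visible units with unit Gaussian noise
        mean = h @ self.W.T + self.a
        return mean + rng.normal(size=mean.shape)

    def cd1_step(self, v0, must_link=(), cannot_link=()):
        h0 = self.hidden_probs(v0)
        h0_samp = (h0 > rng.random(h0.shape)).astype(float)
        v1 = self.sample_visible(h0_samp)
        h1 = self.hidden_probs(v1)
        # Contrastive-divergence gradient (positive minus negative phase)
        dW = v0.T @ h0 - v1.T @ h1
        # Hypothetical pairwise guidance on the hidden features:
        # descend ||h_i - h_j||^2 for must-link pairs, ascend it for cannot-link
        gH = np.zeros_like(h0)
        for i, j in must_link:
            gH[i] -= h0[i] - h0[j]
            gH[j] -= h0[j] - h0[i]
        for i, j in cannot_link:
            gH[i] += h0[i] - h0[j]
            gH[j] += h0[j] - h0[i]
        gH *= h0 * (1.0 - h0)          # chain rule through the sigmoid
        dW += self.lam * (v0.T @ gH)
        n = v0.shape[0]
        self.W += self.lr * dW / n
        self.a += self.lr * (v0 - v1).mean(axis=0)
        self.b += self.lr * (h0 - h1 + self.lam * gH).mean(axis=0)

# Toy run: two Gaussian blobs, one must-link and one cannot-link constraint
X = np.vstack([rng.normal(-2, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
rbm = GaussianRBM(n_vis=8, n_hid=16)
for _ in range(200):
    rbm.cd1_step(X, must_link=[(0, 1)], cannot_link=[(0, 99)])

# Hidden-layer features as input to K-means, as in the paper's evaluation
labels = KMeans(n_clusters=2, n_init=10).fit_predict(rbm.hidden_probs(X))
```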




Read also

112 - Guido Montufar 2018
The restricted Boltzmann machine is a network of stochastic units with undirected interactions between pairs of visible and hidden units. This model was popularized as a building block of deep learning architectures and has continued to play an important role in applied and theoretical machine learning. Restricted Boltzmann machines carry a rich structure, with connections to geometry, applied algebra, probability, statistics, machine learning, and other areas. The analysis of these models is attractive in its own right and also as a platform to combine and generalize mathematical tools for graphical models with hidden variables. This article gives an introduction to the mathematical analysis of restricted Boltzmann machines, reviews recent results on the geometry of the sets of probability distributions representable by these models, and suggests a few directions for further investigation.
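For reference, the probability model whose geometry is analyzed here is the standard binary RBM; in the usual notation (visible units $v \in \{0,1\}^m$, hidden units $h \in \{0,1\}^n$, weights $W$, biases $a, b$, normalization $Z$), the joint distribution and the marginal over the visibles are:

```latex
p(v, h) = \frac{1}{Z} \exp\Big( \sum_i a_i v_i + \sum_j b_j h_j + \sum_{i,j} v_i W_{ij} h_j \Big),
\qquad
p(v) = \frac{e^{\sum_i a_i v_i}}{Z} \prod_{j=1}^{n} \Big( 1 + e^{\, b_j + \sum_i v_i W_{ij}} \Big).
```

The sets of distributions representable by these models, studied in the article, are exactly the families of marginals $p(v)$ traced out over the hidden units.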
Restricted Boltzmann machines (RBMs) are energy-based neural networks commonly used as the building blocks of deep neural architectures. In this work, we derive a deterministic framework for the training, evaluation, and use of RBMs based upon the Thouless-Anderson-Palmer (TAP) mean-field approximation of widely connected systems with weak interactions, coming from spin-glass theory. While the TAP approach has been extensively studied for fully visible binary spin systems, our construction is generalized to latent-variable models, as well as to arbitrarily distributed real-valued spin systems with bounded support. In our numerical experiments, we demonstrate the effective deterministic training of our proposed models and are able to show interesting features of unsupervised learning which could not be directly observed with sampling. Additionally, we demonstrate how to utilize our TAP-based framework to leverage trained RBMs as joint priors in denoising problems.
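As a rough sketch of what the TAP approximation looks like in the simplest binary-binary case (the paper's construction generalizes this to real-valued units with bounded support), the second-order TAP self-consistency equations for the magnetizations $m_i^v \approx \langle v_i \rangle$ and $m_j^h \approx \langle h_j \rangle$ take the following form; the last term in each equation is the Onsager reaction correction, and the notation here is my own:

```latex
m_j^h = \sigma\Big( b_j + \sum_i W_{ij}\, m_i^v
        - \big(m_j^h - \tfrac{1}{2}\big) \sum_i W_{ij}^2\, m_i^v (1 - m_i^v) \Big),
\qquad
m_i^v = \sigma\Big( a_i + \sum_j W_{ij}\, m_j^h
        - \big(m_i^v - \tfrac{1}{2}\big) \sum_j W_{ij}^2\, m_j^h (1 - m_j^h) \Big),
```

with $\sigma(x) = 1/(1 + e^{-x})$. Iterating these fixed-point equations replaces Gibbs sampling in the negative phase, which is what makes the training deterministic.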
81 - Yusuke Nomura 2020
The variational wave functions based on neural networks have recently begun to be recognized as a powerful ansatz for representing quantum many-body states accurately. To show the usefulness of the method among all available numerical methods, it is imperative to investigate its performance on challenging many-body problems for which exact solutions are not available. Here, we construct a variational wave function with one of the simplest neural networks, the restricted Boltzmann machine (RBM), and apply it to a fundamental but unsolved quantum spin Hamiltonian, the two-dimensional $J_1$-$J_2$ Heisenberg model on the square lattice. We supplement the RBM wave function with quantum-number projections, which restore the symmetry of the wave function and make it possible to calculate excited states. We then perform a systematic investigation of the performance of the RBM. We show that, with the help of the symmetry, the RBM wave function achieves state-of-the-art accuracy in both ground-state and excited-state calculations. The study provides a practical guideline on how to achieve accuracy in a controlled manner.
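For concreteness, the standard (symmetry-unprojected) RBM ansatz used in such variational calculations, for spins $\sigma_i = \pm 1$ and complex parameters $a$, $b$, $W$, is obtained by tracing out the hidden units; the quantum-number projections added in the paper are omitted here:

```latex
\Psi_{\mathrm{RBM}}(\sigma)
 = \sum_{\{h_j = \pm 1\}} \exp\Big( \sum_i a_i \sigma_i + \sum_j b_j h_j + \sum_{i,j} W_{ij} \sigma_i h_j \Big)
 = e^{\sum_i a_i \sigma_i} \prod_j 2 \cosh\Big( b_j + \sum_i W_{ij} \sigma_i \Big).
```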
The search for novel entangled phases of matter has led to the recent discovery of a new class of ``entanglement transitions'', exemplified by random tensor networks and monitored quantum circuits. Most known examples can be understood as classical ordering transitions in an underlying statistical mechanics model, where entanglement maps onto the free-energy cost of inserting a domain wall. In this paper, we study the possibility of entanglement transitions driven by physics beyond such statistical mechanics mappings. Motivated by recent applications of neural-network-inspired variational Ansätze, we investigate under what conditions on the variational parameters these Ansätze can capture an entanglement transition. We study the entanglement scaling of short-range restricted Boltzmann machine (RBM) quantum states with random phases. For uncorrelated random phases, we analytically demonstrate the absence of an entanglement transition and reveal subtle finite-size effects in finite-size numerical simulations. Introducing phases with correlations decaying as $1/r^\alpha$ in real space, we observe three regions with different scalings of the entanglement entropy depending on the exponent $\alpha$. We study the nature of the transitions between these regions, finding numerical evidence for critical behavior. Our work establishes the presence of long-range correlated phases in RBM-based wave functions as a required ingredient for entanglement transitions.
Restricted Boltzmann Machines (RBMs) and models derived from them have been successfully used as basic building blocks in deep artificial neural networks for automatic feature extraction and unsupervised weight initialization, but also as density estimators. Thus, their generative and discriminative capabilities, as well as their computational cost, are instrumental to a wide range of applications. Our main contribution is to look at RBMs from a topological perspective, bringing insights from network science. Firstly, we show that RBMs and Gaussian RBMs (GRBMs) are bipartite graphs which naturally possess a small-world topology. Secondly, we demonstrate on both synthetic and real-world datasets that by constraining RBMs and GRBMs to a scale-free topology (while still considering local neighborhoods and the data distribution), we reduce the number of weights that need to be computed by a few orders of magnitude, at virtually no loss in generative performance. Thirdly, we show that, for a fixed number of weights, our proposed sparse models (which by design have a higher number of hidden neurons) achieve better generative capabilities than standard fully connected RBMs and GRBMs (which by design have a smaller number of hidden neurons), at no additional computational cost.
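A minimal sketch of one way to impose such a scale-free bipartite topology: sample hidden-unit degrees from a power law and mask the weight matrix accordingly. The exponent `gamma`, the Pareto-style degree sampler, and the omission of the paper's local-neighborhood and data-distribution considerations are all assumptions of this sketch, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_free_mask(n_vis, n_hid, gamma=2.5, min_deg=2):
    # Power-law (Pareto) degree sample for each hidden unit:
    # x = x_min * U^{-1/(gamma-1)} with U uniform in (0, 1]
    degs = (min_deg * (1.0 - rng.random(n_hid)) ** (-1.0 / (gamma - 1.0))).astype(int)
    degs = np.clip(degs, min_deg, n_vis)
    mask = np.zeros((n_vis, n_hid))
    for j, d in enumerate(degs):
        # Connect hidden unit j to d randomly chosen visible units
        mask[rng.choice(n_vis, size=d, replace=False), j] = 1.0
    return mask

n_vis, n_hid = 784, 2000
mask = scale_free_mask(n_vis, n_hid)
W = rng.normal(0, 0.01, (n_vis, n_hid)) * mask

# During training, each CD gradient step would be masked so that pruned
# weights stay exactly zero, e.g.:  W += lr * dW * mask
print(f"connection density: {mask.mean():.4f}")  # far below a dense RBM's 1.0
```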
