
Fast counting with tensor networks

Posted by Stefanos Kourtis
Publication date: 2018
Research language: English





We introduce tensor network contraction algorithms for counting satisfying assignments of constraint satisfaction problems (#CSPs). We represent an arbitrary #CSP formula as a tensor network whose full contraction yields the number of satisfying assignments of that formula, and use graph-theoretical methods to determine favorable orders of contraction. We employ our heuristics for the solution of #P-hard counting Boolean satisfiability (#SAT) problems, namely monotone #1-in-3SAT and #Cubic-Vertex-Cover, and find that they outperform state-of-the-art solvers by a significant margin.
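To make the construction concrete, here is a minimal sketch (my own illustration, not the authors' implementation) of counting satisfying assignments of a small monotone 1-in-3SAT formula by full tensor network contraction. Each clause becomes a rank-3 0/1 tensor, and sharing an einsum index between clause tensors plays the role of a COPY tensor for the shared variable; the clause tuples and variable count below are hypothetical toy choices.

```python
# Counting satisfying assignments of a toy monotone 1-in-3SAT formula
# by contracting a tensor network with numpy.einsum.
import itertools
import numpy as np

# Clause tensor: entry is 1 iff exactly one of the three bits is 1.
T = np.zeros((2, 2, 2))
for a, b, c in itertools.product((0, 1), repeat=3):
    T[a, b, c] = 1.0 if a + b + c == 1 else 0.0

# Toy formula over variables x0..x4: (x0,x1,x2), (x2,x3,x4), (x0,x3,x4).
clauses = [(0, 1, 2), (2, 3, 4), (0, 3, 4)]

# Full contraction: summing the product of clause tensors over all
# shared variable indices yields the number of satisfying assignments.
count = np.einsum("abc,cde,ade->", T, T, T)

# Brute-force check over all 2^5 assignments.
brute = sum(
    all(sum(x[v] for v in cl) == 1 for cl in clauses)
    for x in itertools.product((0, 1), repeat=5)
)
print(int(count), brute)  # both give 2 for this formula
```

For this toy formula the contraction returns 2, matching the brute-force count; the contraction order used by einsum here is arbitrary, whereas the paper's graph-theoretical heuristics pick favorable orders for large instances.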


Read also

We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017)]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs/outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
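The contraction-decomposition step at the heart of such a sweep can be illustrated with a minimal sketch (assuming two rank-3 tensors sharing one bond; the shapes, tolerance, and toy rank-deficient example are my own choices, not the authors' code): contract a neighboring pair, then split it back with a truncated SVD, compressing the shared bond.

```python
# One contract-decompose step on a single bond, with SVD truncation.
import numpy as np

def contract_decompose(A, B, max_bond, tol=1e-12):
    l, p, k = A.shape
    k2, q, r = B.shape
    assert k == k2
    # Contract over the shared bond, then expose an (l*p) x (q*r) matrix.
    M = np.tensordot(A, B, axes=([2], [0])).reshape(l * p, q * r)
    U, S, Vh = np.linalg.svd(M, full_matrices=False)
    # Keep singular values above the relative tolerance, capped at max_bond.
    chi = min(max_bond, int(np.sum(S > tol * S[0])))
    U, S, Vh = U[:, :chi], S[:chi], Vh[:chi, :]
    sqrtS = np.sqrt(S)
    A_new = (U * sqrtS).reshape(l, p, chi)
    B_new = (sqrtS[:, None] * Vh).reshape(chi, q, r)
    return A_new, B_new

# Toy usage: a redundant bond of dimension 4 collapses to its true rank.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2, 4))
B = np.zeros((4, 2, 2))
B[:2] = rng.normal(size=(2, 2, 2))  # effective bond rank is only 2
A2, B2 = contract_decompose(A, B, max_bond=8)
print(A2.shape[2])  # prints 2: the compressed bond dimension
```

Sweeping this step over all lattice bonds is what propagates local constraints to longer ranges before the coarse-graining decimation.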
Restricted Boltzmann machines (RBM) and deep Boltzmann machines (DBM) are important models in machine learning, and have recently found numerous applications in quantum many-body physics. We show that there are fundamental connections between them and tensor networks. In particular, we demonstrate that any RBM and DBM can be exactly represented as a two-dimensional tensor network. This representation gives an understanding of the expressive power of RBM and DBM using entanglement structures of the tensor networks, and also provides an efficient tensor network contraction algorithm for computing the partition function of RBM and DBM. Using numerical experiments, we demonstrate that the proposed algorithm is much more accurate than state-of-the-art machine learning methods in estimating the partition function of restricted Boltzmann machines and deep Boltzmann machines, and has potential applications in training deep Boltzmann machines for general machine learning tasks.
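As a concrete illustration of the partition-function claim, here is a minimal sketch (a toy construction of mine, not the paper's algorithm) that writes the partition function of a small RBM as a full tensor network contraction and checks it against brute-force enumeration. Each coupling W[i, j] contributes a 2x2 tensor exp(W v h), each unit a length-2 field vector, and shared einsum indices act as copy tensors.

```python
# Exact RBM partition function as a tensor network contraction.
import itertools
import numpy as np

rng = np.random.default_rng(1)
nv, nh = 3, 2
W = rng.normal(size=(nv, nh))
a = rng.normal(size=nv)   # visible biases
b = rng.normal(size=nh)   # hidden biases

bits = np.array([0.0, 1.0])
vis = [np.exp(a[i] * bits) for i in range(nv)]   # field vectors exp(a*v)
hid = [np.exp(b[j] * bits) for j in range(nh)]   # field vectors exp(b*h)
pair = [[np.exp(W[i, j] * np.outer(bits, bits)) for j in range(nh)]
        for i in range(nv)]                      # 2x2 tensors exp(W*v*h)

# Build the einsum: one index per unit, one (i, j) index pair per coupling.
v_idx, h_idx = "abc"[:nv], "xy"[:nh]
subs = list(v_idx) + list(h_idx) + [v_idx[i] + h_idx[j]
                                    for i in range(nv) for j in range(nh)]
ops = vis + hid + [pair[i][j] for i in range(nv) for j in range(nh)]
Z_tn = np.einsum(",".join(subs) + "->", *ops)

# Brute force over all 2^(nv+nh) configurations.
Z_bf = sum(
    np.exp(a @ np.array(v) + b @ np.array(h) + np.array(v) @ W @ np.array(h))
    for v in itertools.product((0, 1), repeat=nv)
    for h in itertools.product((0, 1), repeat=nh)
)
print(np.isclose(Z_tn, Z_bf))  # True
```

For this tiny network einsum contracts exactly; the paper's contribution is making the two-dimensional contraction efficient and accurate at sizes where brute force is hopeless.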
Jing Liu, Sujie Li, Jiang Zhang (2021)
Modeling the joint distribution of high-dimensional data is a central task in unsupervised machine learning. In recent years, much interest has been attracted to developing learning models based on tensor networks, which have the advantages of a theoretical understanding of their expressive power in terms of entanglement properties, and of serving as a bridge connecting classical and quantum computation. Despite this great potential, however, existing tensor-network-based unsupervised models work only as a proof of principle, as their performance is much worse than that of standard models such as restricted Boltzmann machines and neural networks. In this work, we present the Autoregressive Matrix Product States (AMPS), a tensor-network-based model combining matrix product states from quantum many-body physics with autoregressive models from machine learning. The model enjoys exact calculation of normalized probabilities and unbiased sampling, as well as a clear theoretical understanding of its expressive power. We demonstrate the performance of our model using two applications: generative modeling on synthetic and real-world data, and reinforcement learning in statistical physics. Using extensive numerical experiments, we show that the proposed model significantly outperforms existing tensor-network-based models and restricted Boltzmann machines, and is competitive with state-of-the-art neural network models.
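To illustrate the "exact normalized probability and unbiased sampling" property of such tensor-network models, here is a minimal sketch (a plain real-valued MPS Born machine of my own, not the AMPS architecture itself): right environments are precomputed once, and exact autoregressive sampling then proceeds site by site from the resulting conditionals.

```python
# Exact autoregressive sampling from an MPS Born machine.
import numpy as np

rng = np.random.default_rng(2)
n, chi, d = 6, 3, 2  # sites, bond dimension, local dimension (bits)

# Random real MPS cores: shapes (1,d,chi), (chi,d,chi), ..., (chi,d,1).
dims = [1] + [chi] * (n - 1) + [1]
A = [rng.normal(size=(dims[i], d, dims[i + 1])) for i in range(n)]

# Right environments of the doubled (ket-bra) network.
R = [None] * (n + 1)
R[n] = np.ones((1, 1))
for i in range(n - 1, -1, -1):
    R[i] = sum(A[i][:, s, :] @ R[i + 1] @ A[i][:, s, :].T for s in range(d))

def sample():
    """Draw one bit string from p(x) = |psi(x)|^2 / Z, exactly."""
    L = np.ones((1, 1))  # accumulated left density of the sampled prefix
    x = []
    for i in range(n):
        w = np.array([np.trace(A[i][:, s, :].T @ L @ A[i][:, s, :] @ R[i + 1])
                      for s in range(d)])
        s = rng.choice(d, p=w / w.sum())  # exact conditional p(x_i | x_<i)
        x.append(s)
        L = A[i][:, s, :].T @ L @ A[i][:, s, :]
    return tuple(x)

# The probability of a drawn sample is available in closed form.
x = sample()
v = np.ones((1, 1))
for i, s in enumerate(x):
    v = v @ A[i][:, s, :]
print(x, (v[0, 0] ** 2) / R[0][0, 0])  # sample and its exact p(x)
```

AMPS itself parametrizes the conditionals differently, but the mechanism above is why no Markov chain and no partition-function estimate are needed.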
We investigate the dynamics of a broad class of stochastic copying processes on a network that includes examples from population genetics (spatially structured Wright-Fisher models), ecology (Hubbell-type models), linguistics (the utterance selection model) and opinion dynamics (the voter model) as special cases. These models all have absorbing states of fixation where all the nodes are in the same state. Earlier studies of these models showed that the mean time at which this occurs can be made to grow as different powers of the network size by varying the degree distribution of the network. Here we demonstrate that this effect can also arise if one varies the asymmetry of the copying dynamics whilst holding the degree distribution constant. In particular, we show that the mean time to fixation can be accelerated even on homogeneous networks when certain nodes are very much more likely to be copied from than copied to. We further show that there is a complex interplay between degree distribution and asymmetry when they may co-vary, and that the results are robust to correlations in the network or the initial condition.
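The asymmetric-copying effect is easy to probe numerically; below is a minimal sketch (toy parameters and the complete graph are my own choices) of a voter-type model in which one hub node is far more likely to be copied from than copied to, with the mean fixation time estimated by simulation.

```python
# Voter-type copying dynamics with an asymmetric "copied-from" hub.
import numpy as np

rng = np.random.default_rng(3)

def fixation_time(n=50, hub_weight=20.0, trials=200):
    # Node j is chosen as the copy SOURCE with probability ~ w[j];
    # the overwritten target is chosen uniformly, so the graph stays
    # homogeneous while the dynamics is asymmetric.
    w = np.ones(n)
    w[0] = hub_weight
    w /= w.sum()
    times = []
    for _ in range(trials):
        state = rng.integers(0, 2, size=n)  # random binary initial condition
        t = 0
        while 0 < state.sum() < n:          # run until fixation (absorbing)
            tgt = rng.integers(n)           # node that gets overwritten
            src = rng.choice(n, p=w)        # node that is copied from
            state[tgt] = state[src]
            t += 1
        times.append(t)
    return np.mean(times)

print(fixation_time(hub_weight=1.0))   # symmetric baseline
print(fixation_time(hub_weight=20.0))  # strongly asymmetric copying
```

Comparing the two runs shows the acceleration of fixation under asymmetry on an otherwise homogeneous network, the qualitative effect described above.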
Machine learning (ML) architectures such as convolutional neural networks (CNNs) have garnered considerable recent attention in the study of quantum many-body systems. However, advanced ML approaches such as transfer learning have seldom been applied to such contexts. Here we demonstrate that a simple recurrent unit (SRU) based efficient and transferable sequence learning framework is capable of learning and accurately predicting the time evolution of the one-dimensional (1D) Ising model with simultaneous transverse and parallel magnetic fields, as quantitatively corroborated by relative entropy and magnetization measurements between the predicted and exact state distributions. At the cost of constant computational complexity, a larger many-body state evolution is predicted in an autoregressive way from just one initial state, without any guidance or knowledge of any Hamiltonian. Our work paves the way for future applications of advanced ML methods to quantum many-body dynamics with knowledge only from a smaller system.
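A heavily hedged sketch of the autoregressive idea follows (it substitutes PyTorch's built-in GRU for the SRU named above, and a synthetic toy dynamics for the Ising data, so it illustrates the workflow only): train a recurrent net on one trajectory, then roll it forward from a single initial state.

```python
# Autoregressive sequence prediction of a toy dynamical system.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, dim = 200, 8
# Toy "dynamics": a fixed random linear map with a nonlinearity.
M = torch.randn(dim, dim) * 0.5
traj = [torch.randn(dim)]
for _ in range(T):
    traj.append(torch.tanh(M @ traj[-1]))
traj = torch.stack(traj)               # shape (T+1, dim)

class Predictor(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)
    def forward(self, x, h=None):
        out, h = self.rnn(x, h)
        return self.head(out), h

model = Predictor(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = traj[:-1].unsqueeze(0)             # inputs:  states 0..T-1
y = traj[1:].unsqueeze(0)              # targets: states 1..T
for step in range(300):
    opt.zero_grad()
    pred, _ = model(x)
    loss = nn.functional.mse_loss(pred, y)
    loss.backward()
    opt.step()

# Autoregressive rollout from the initial state alone: each output is
# fed back as the next input, with the hidden state carried along.
with torch.no_grad():
    s, h = traj[0].view(1, 1, dim), None
    for t in range(10):
        s, h = model(s, h)
    print(nn.functional.mse_loss(s.squeeze(), traj[10]).item())
```

The printed rollout error is the kind of quantity the paper tracks (there via relative entropy and magnetization) when judging predicted against exact evolution.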