
Variational Pseudolikelihood for Regularized Ising Inference

Posted by: Charles Fisher
Publication date: 2014
Language: English
Author: Charles K. Fisher





I propose a variational approach to maximum pseudolikelihood inference of the Ising model. The variational algorithm is more computationally efficient and does a better job of predicting out-of-sample correlations than $L_2$-regularized maximum pseudolikelihood inference, as well as mean-field and isolated-spin-pair approximations with pseudocount regularization. The key to the approach is a variational energy that regularizes the inference problem by shrinking the couplings towards zero, while still allowing some large couplings to explain strong correlations. The utility of the variational pseudolikelihood approach is illustrated by training an Ising model to represent the letters A-J using samples of letters from different computer fonts.
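Since the abstract does not spell out the variational energy itself, the sketch below implements only the $L_2$-regularized maximum pseudolikelihood baseline it compares against: each spin's conditional distribution is a logistic regression on the remaining spins, and the couplings are uniformly penalized toward zero. Function names and hyperparameters are illustrative, not from the paper.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def fit_plm_ising(S, lam=0.01, lr=0.5, steps=1000):
    """L2-regularized maximum pseudolikelihood for an Ising model.

    S: (M, n) array of +/-1 spins. Minimizes the per-sample negative
    log-pseudolikelihood sum_i -log p(s_i | s_{-i}) plus lam*||J||^2
    by plain gradient descent, keeping J symmetric with zero diagonal.
    """
    M, n = S.shape
    J, h = np.zeros((n, n)), np.zeros(n)
    for _ in range(steps):
        F = S @ J + h                            # effective field on each spin
        G = -2.0 * S * expit(-2.0 * S * F) / M   # gradient of the NLL w.r.t. F
        gJ = S.T @ G + 2.0 * lam * J
        gJ = 0.5 * (gJ + gJ.T)                   # symmetrize the update
        np.fill_diagonal(gJ, 0.0)                # forbid self-couplings
        J -= lr * gJ
        h -= lr * G.sum(axis=0)
    return J, h
```

The paper's variational energy differs from this baseline precisely in replacing the uniform $L_2$ shrinkage with a regularizer that leaves room for a few large couplings.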



Read also

We investigate the critical properties of Ising models on a Regularized Apollonian Network (RAN), here defined as a kind of Apollonian Network (AN) in which the connectivity asymmetry associated with its corners is removed. Different choices for the coupling constants between nearest neighbors are considered, and two different order parameters are used to detect the critical behaviour. While ordinary ferromagnetic and anti-ferromagnetic models on the RAN do not undergo a phase transition, some anti-ferrimagnetic models show an interesting infinite-order transition. All results are obtained by an exact analytical approach based on iterative partial tracing of the Boltzmann factor as intermediate steps in the calculation of the partition function and the order parameters.
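As a toy illustration of the tracing idea (not the authors' full iterative scheme, which propagates effective multi-spin interactions across generations), the sketch below sums out the newest generation of an Apollonian network analytically, using sum_{s=+-1} exp(bJs(sa+sb+sc)) = 2cosh(bJ(sa+sb+sc)), and enumerates the remaining spins by brute force. The construction and names are illustrative.

```python
import itertools
import numpy as np

def apollonian_edges(generations):
    """Edge list of an Apollonian network grown from a triangle.

    Each generation inserts one new node per existing triangle and
    links it to that triangle's three vertices. Returns the edges,
    the node count, and the nodes of the final generation."""
    edges = [(0, 1), (0, 2), (1, 2)]
    triangles = [(0, 1, 2)]
    n, new_nodes = 3, []
    for _ in range(generations):
        new_tris, new_nodes = [], []
        for (a, b, c) in triangles:
            v, n = n, n + 1
            edges += [(v, a), (v, b), (v, c)]
            new_tris += [(v, a, b), (v, b, c), (v, a, c)]
            new_nodes.append(v)
        triangles = new_tris
    return edges, n, new_nodes

def partition_function(beta, J, generations=1):
    """Z for an Ising model on a small Apollonian network.

    The final generation (each node attached to exactly three older
    spins and to none of its own generation) is traced analytically;
    the remaining spins are enumerated by brute force, so this shows
    only one step of the iterative tracing, for tiny sizes."""
    edges, n, last = apollonian_edges(generations)
    last = set(last)
    old = [i for i in range(n) if i not in last]
    old_edges = [(a, b) for (a, b) in edges
                 if a not in last and b not in last]
    nbrs = {v: [a if b == v else b for (a, b) in edges if v in (a, b)]
            for v in last}
    Z = 0.0
    for conf in itertools.product([-1, 1], repeat=len(old)):
        s = dict(zip(old, conf))
        w = np.exp(beta * J * sum(s[a] * s[b] for a, b in old_edges))
        for v in last:
            w *= 2.0 * np.cosh(beta * J * sum(s[u] for u in nbrs[v]))
        Z += w
    return Z
```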
Many Imitation and Reinforcement Learning approaches rely on the availability of expert-generated demonstrations for learning policies or value functions from data. Obtaining a reliable distribution of trajectories from motion planners is non-trivial, since it must broadly cover the space of states likely to be encountered during execution while also satisfying task-based constraints. We propose a sampling strategy based on variational inference to generate distributions of feasible, low-cost trajectories for high-DOF motion planning tasks. This includes a distributed, particle-based motion planning algorithm which leverages a structured graphical representation for inference over multi-modal posterior distributions. We also make explicit connections to both approximate inference for trajectory optimization and entropy-regularized reinforcement learning.
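The abstract's particle-based planner is not specified here, so the sketch below shows only the generic pattern it gestures at: sample trajectory particles from a Gaussian, weight them by exp(-cost/temperature), and refit the distribution. The cost terms, parameters, and names are all invented for illustration.

```python
import numpy as np

def trajectory_cost(traj, obstacles, goal):
    """Toy cost: smoothness + endpoint error + obstacle proximity.
    traj: (T, 2) waypoints in the plane. Purely illustrative."""
    smooth = np.sum(np.diff(traj, axis=0) ** 2)
    end = 10.0 * np.sum((traj[-1] - goal) ** 2)
    clear = sum(np.sum(np.exp(-np.sum((traj - ob) ** 2, axis=1)))
                for ob in obstacles)
    return smooth + end + 5.0 * clear

def variational_trajectories(start, goal, obstacles, T=20,
                             n_particles=200, iters=30, temp=2.0, seed=0):
    """Fit a Gaussian over trajectories by importance-weighted updates;
    a minimal stand-in for a particle-based variational planner."""
    rng = np.random.default_rng(seed)
    mean = np.linspace(start, goal, T)          # straight-line initialization
    std = np.ones((T, 2))
    for _ in range(iters):
        particles = mean + std * rng.normal(size=(n_particles, T, 2))
        particles[:, 0] = start                 # clamp the start state
        costs = np.array([trajectory_cost(p, obstacles, goal)
                          for p in particles])
        w = np.exp(-(costs - costs.min()) / temp)
        w /= w.sum()
        mean = np.einsum('p,pti->ti', w, particles)
        var = np.einsum('p,pti->ti', w, (particles - mean) ** 2)
        std = np.sqrt(var + 1e-6)
    return mean
```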
The Pachinko Allocation Machine (PAM) is a deep topic model that allows representing rich correlation structures among topics by a directed acyclic graph over topics. Because of the flexibility of the model, however, approximate inference is very difficult. Perhaps for this reason, only a small number of potential PAM architectures have been explored in the literature. In this paper we present an efficient and flexible amortized variational inference method for PAM, using a deep inference network to parameterize the approximate posterior distribution in a manner similar to the variational autoencoder. Our inference method produces more coherent topics than state-of-the-art inference methods for PAM while being an order of magnitude faster, which allows exploration of a wider range of PAM architectures than have previously been studied.
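PAM's DAG-structured posterior is beyond a short sketch, but the amortization idea itself can be shown with a ProdLDA-style flat topic model: an inference network maps a document's word counts to the parameters of a logistic-normal posterior over topic proportions, trained with a reparameterized negative ELBO. Everything below is an assumed simplification, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AmortizedTopicInference(nn.Module):
    """VAE-style amortized inference for a flat topic model (sketch)."""

    def __init__(self, vocab_size, n_topics, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, n_topics)
        self.logvar = nn.Linear(hidden, n_topics)
        self.topic_word = nn.Linear(n_topics, vocab_size, bias=False)

    def forward(self, counts):
        # Inference network: word counts -> logistic-normal posterior params.
        h = self.encoder(counts)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        theta = F.softmax(z, dim=-1)                   # topic proportions
        log_word_probs = F.log_softmax(self.topic_word(theta), dim=-1)
        recon = -(counts * log_word_probs).sum(-1)     # multinomial NLL
        kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(-1)
        return (recon + kl).mean()                     # negative ELBO
```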
Identifying small subsets of features that are relevant for prediction and/or classification tasks is a central problem in machine learning and statistics. The feature selection task is especially important, and computationally difficult, for modern datasets where the number of features can be comparable to, or even exceed, the number of samples. Here, we show that feature selection with Bayesian inference takes a universal form and reduces to calculating the magnetizations of an Ising model, under some mild conditions. Our results exploit the observation that the evidence takes a universal form for strongly-regularizing priors: priors that have a large effect on the posterior probability even in the infinite data limit. We derive explicit expressions for feature selection for generalized linear models, a large class of statistical techniques that include linear and logistic regression. We illustrate the power of our approach by analyzing feature selection in a logistic regression-based classifier trained to distinguish between the letters B and D in the notMNIST dataset.
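The paper's contribution is the mapping from Bayesian feature selection to Ising magnetizations; given fields h and couplings J produced by such a mapping, the magnetizations themselves can be estimated, for example, by the naive mean-field iteration sketched below (one simple choice; the paper does not prescribe this particular solver).

```python
import numpy as np

def mean_field_magnetizations(h, J, beta=1.0, iters=500, damping=0.5):
    """Self-consistent mean-field magnetizations of an Ising model.

    Iterates m_i = tanh(beta * (h_i + sum_j J_ij * m_j)) with damping
    for stability. h: (n,) fields, J: (n, n) symmetric couplings."""
    m = np.zeros_like(h)
    for _ in range(iters):
        m_new = np.tanh(beta * (h + J @ m))
        m = damping * m + (1 - damping) * m_new
    return m

# In the feature selection reading, a magnetization near +1 marks a
# feature the posterior deems relevant and near -1 irrelevant, so
# thresholding m at zero yields the selected subset.
```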
Maximum a posteriori (MAP) inference in discrete-valued Markov random fields is a fundamental problem in machine learning that involves identifying the most likely configuration of random variables given a distribution. Due to the difficulty of this combinatorial problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms that are often interpreted as coordinate descent on the dual LP. To achieve more desirable computational properties, a number of methods regularize the LP with an entropy term, leading to a class of smooth message passing algorithms with convergence guarantees. In this paper, we present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods. The proposed algorithms incorporate the familiar steps of standard smooth message passing algorithms, which can be viewed as coordinate minimization steps. We show that these accelerated variants achieve faster rates for finding $\epsilon$-optimal points of the unregularized problem, and, when the LP is tight, we prove that the proposed algorithms recover the true MAP solution in fewer iterations than standard message passing algorithms.
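The accelerated randomized variants are the paper's contribution and are not reproduced here, but the underlying smooth message passing can be illustrated on a chain MRF: the max in max-product is replaced by a temperature-scaled log-sum-exp, which is the entropy regularization the abstract describes. The decoding step and parameter choices below are illustrative.

```python
import numpy as np

def smoothed_map_chain(unary, pairwise, temp=0.1):
    """Smoothed MAP inference on a chain MRF (sketch).

    unary:    (T, K) log-potentials for T variables with K states
    pairwise: (K, K) log-potential shared by consecutive pairs
    As temp -> 0 this recovers standard max-product (Viterbi)."""
    T, K = unary.shape
    msg = np.zeros((T, K))           # msg[t] = smoothed message into node t
    for t in range(1, T):
        scores = (unary[t - 1] + msg[t - 1])[:, None] + pairwise
        # soft max over the previous state via tempered log-sum-exp
        msg[t] = temp * np.logaddexp.reduce(scores / temp, axis=0)
    # decode greedily from the smoothed messages (backward pass)
    states = np.zeros(T, dtype=int)
    states[-1] = np.argmax(unary[-1] + msg[-1])
    for t in range(T - 2, -1, -1):
        states[t] = np.argmax(unary[t] + msg[t] + pairwise[:, states[t + 1]])
    return states
```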
