
Faster Algorithms for Max-Product Message-Passing

Added by Julian McAuley
Publication date: 2009
Language: English





Maximum A Posteriori inference in graphical models is often solved via message-passing algorithms, such as the junction-tree algorithm or loopy belief-propagation. The exact solution to this problem is well known to be exponential in the size of the model's maximal cliques after it has been triangulated, while approximate inference is typically exponential in the size of the model's factors. In this paper, we take advantage of the fact that many models have maximal cliques that are larger than their constituent factors, and also of the fact that many factors consist entirely of latent variables (i.e., they do not depend on an observation). This is a common case in a wide variety of applications, including grids, trees, and ring-structured models. In such cases, we are able to decrease the exponent of the complexity of message-passing by 0.5 for both exact and approximate inference.
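
To make the baseline cost concrete, the sketch below (my own illustration, not the paper's faster algorithm) runs plain max-product on a chain of discrete variables: with K states per variable, each pairwise message already costs O(K^2), and it is this kind of exponent that the paper reduces when cliques factor into smaller, latent-variable-only pieces.

    import numpy as np

    def max_product_chain(unary, pairwise):
        """MAP assignment for a chain MRF. unary: (N, K) log-potentials;
        pairwise: (N-1, K, K) log-potentials for edges (i, i+1)."""
        N, K = unary.shape
        msgs = np.zeros((N, K))                 # forward max-messages (log space)
        back = np.zeros((N - 1, K), dtype=int)  # argmax pointers for decoding
        for i in range(1, N):
            # Each message maximizes over the K states of the previous node,
            # for each of the K states of the current node: O(K^2) work per edge.
            scores = (msgs[i - 1] + unary[i - 1])[:, None] + pairwise[i - 1]
            msgs[i] = scores.max(axis=0)
            back[i - 1] = scores.argmax(axis=0)
        # Backtrack to recover the maximizing assignment.
        assignment = np.empty(N, dtype=int)
        assignment[-1] = (msgs[-1] + unary[-1]).argmax()
        for i in range(N - 2, -1, -1):
            assignment[i] = back[i, assignment[i + 1]]
        return assignment

    # Example: 5 binary variables with random log-potentials.
    rng = np.random.default_rng(0)
    print(max_product_chain(rng.normal(size=(5, 2)), rng.normal(size=(4, 2, 2))))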



Related research

Tamir Hazan, Amnon Shashua (2009)
In this paper we treat both forms of probabilistic inference, estimating marginal probabilities of the joint distribution and finding the most probable assignment, through a unified message-passing algorithm architecture. We generalize the Belief Propagation (BP) sum-product and max-product algorithms and the tree-reweighted (TRW) sum- and max-product algorithms (TRBP), and introduce a new set of convergent algorithms based on convex free energies and a Linear-Programming (LP) relaxation obtained as the zero-temperature limit of a convex free energy. The main idea of this work arises from taking a general perspective on the existing BP and TRBP algorithms while observing that they are all reductions from the basic optimization formulation $f + \sum_i h_i$, where the function $f$ is extended-valued, strictly convex, but non-smooth, and the functions $h_i$ are extended-valued (not necessarily convex). We use tools from convex duality to present the primal-dual ascent algorithm, an extension of the Bregman successive projection scheme designed to handle optimization problems of the general form $f + \sum_i h_i$. Mapping the fractional-free-energy variational principle to this framework yields the norm-product message-passing algorithm. Special cases include sum-product and max-product (the BP algorithms) and the TRBP algorithms. When the fractional free energy is set to be convex (a convex free energy), the norm-product algorithm is globally convergent for estimating marginal probabilities and for approximating the LP relaxation. We also introduce another branch of the norm-product, the convex-max-product, which is convergent (unlike max-product) and aims at solving the LP relaxation.
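
As a small side note on the zero-temperature idea mentioned in this abstract: sum-product-style updates aggregate states with a temperature-scaled log-sum-exp, which collapses to the plain max used by max-product-style updates as the temperature goes to zero. The few lines below are my own numeric illustration of that limit, not code from the paper.

    import numpy as np

    def soft_max(scores, T):
        """Temperature-scaled log-sum-exp; approaches scores.max() as T -> 0."""
        return T * np.log(np.sum(np.exp(scores / T)))

    scores = np.array([1.0, 2.5, 2.4])
    for T in [1.0, 0.1, 0.01]:
        print(T, soft_max(scores, T))   # tends to max(scores) = 2.5
    print("max:", scores.max())
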
Network alignment generalizes and unifies several approaches for forming a matching or alignment between the vertices of two graphs. We study a mathematical programming framework for the network alignment problem and a sparse variant of it in which only a small number of matches between the vertices of the two graphs are allowed. We propose a new message passing algorithm that allows us to compute, very efficiently, approximate solutions to sparse network alignment problems with graph sizes as large as hundreds of thousands of vertices. We also provide extensive simulations comparing our algorithms with two of the best solvers for network alignment problems on two synthetic matching problems, two bioinformatics problems, and three large ontology alignment problems, including a multilingual problem with a known labeled alignment.
Wei-Kuo Chen, Wai-Kit Lam (2020)
We consider a broad class of Approximate Message Passing (AMP) algorithms, defined as a Lipschitzian functional iteration in terms of an $n \times n$ random symmetric matrix $A$. We establish universality in noise for this AMP as $n \to \infty$ and validate this behavior in a number of AMPs popularly adopted in compressed sensing, statistical inference, and optimization in spin glasses.
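
For readers unfamiliar with the kind of iteration being analyzed, the sketch below shows one common AMP recursion for a symmetric matrix: a Lipschitz nonlinearity applied coordinate-wise, a matrix multiply, and an Onsager correction involving the previous iterate. It is a generic textbook-style instance under my own assumptions, not the exact class studied in the paper.

    import numpy as np

    def amp_symmetric(A, x0, f, f_prime, iters=20):
        """x^{t+1} = A f(x^t) - b_t f(x^{t-1}),  with b_t = mean of f'(x^t)."""
        x_prev, x = np.zeros_like(x0), x0
        for _ in range(iters):
            onsager = f_prime(x).mean() * f(x_prev)   # Onsager correction term
            x_prev, x = x, A @ f(x) - onsager
        return x

    # Example: GOE-like random symmetric matrix with f = tanh.
    rng = np.random.default_rng(0)
    n = 500
    G = rng.normal(size=(n, n)) / np.sqrt(n)
    A = (G + G.T) / np.sqrt(2)
    x = amp_symmetric(A, rng.normal(size=n), np.tanh, lambda z: 1 - np.tanh(z) ** 2)
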
We propose a novel binary message passing decoding algorithm for product-like codes based on bounded distance decoding (BDD) of the component codes. The algorithm, dubbed iterative BDD with scaled reliability (iBDD-SR), exploits the channel reliabilities and is therefore soft in nature. However, the messages exchanged by the component decoders are binary (hard) messages, which significantly reduces the decoder data flow. The exchanged binary messages are obtained by combining the channel reliability with the BDD decoder output reliabilities, properly conveyed by a scaling factor applied to the BDD decisions. We perform a density evolution analysis for generalized low-density parity-check (GLDPC) code ensembles and spatially coupled GLDPC code ensembles, from which the scaling factors of the iBDD-SR for product and staircase codes, respectively, can be obtained. For the additive white Gaussian noise channel, we show performance gains up to $0.29$ dB and $0.31$ dB for product and staircase codes compared to conventional iterative BDD (iBDD) with the same decoder data flow. Furthermore, we show that iBDD-SR approaches the performance of an ideal iBDD that prevents miscorrections.
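
A rough sketch of the message-forming rule described in this abstract, under my own simplifying assumptions (a single bit-level rule, with BDD failures contributing nothing): the BDD decision is mapped to ±1, scaled by a factor w, added to the channel LLR, and the sign of the sum is the hard message passed to the other component decoder. The scaling factor is the quantity the paper derives via density evolution; here it is just a free parameter.

    import numpy as np

    def ibdd_sr_message(bdd_decision, bdd_success, channel_llr, w):
        """bdd_decision: 0/1 bits; bdd_success: False where BDD declared a failure;
        channel_llr: per-bit channel LLRs (positive favors bit 0); w: scaling factor."""
        bipolar = 1.0 - 2.0 * bdd_decision                   # bit 0 -> +1, bit 1 -> -1
        soft = np.where(bdd_success, w * bipolar, 0.0) + channel_llr
        return (soft < 0).astype(int)                        # back to a hard 0/1 message

    # Example: three bits, with the second BDD decoding having failed.
    print(ibdd_sr_message(np.array([0, 1, 1]), np.array([True, False, True]),
                          np.array([2.0, -0.5, 1.2]), w=1.5))
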
We improve the running times of $O(1)$-approximation algorithms for the set cover problem in geometric settings, specifically, covering points by disks in the plane, or covering points by halfspaces in three dimensions. In the unweighted case, Agarwal and Pan [SoCG 2014] gave a randomized $O(n\log^4 n)$-time, $O(1)$-approximation algorithm, by using variants of the multiplicative weight update (MWU) method combined with geometric data structures. We simplify the data structure requirement in one of their methods and obtain a deterministic $O(n\log^3 n\log\log n)$-time algorithm. With further new ideas, we obtain a still faster randomized $O(n\log n(\log\log n)^{O(1)})$-time algorithm. For the weighted problem, we also give a randomized $O(n\log^4 n\log\log n)$-time, $O(1)$-approximation algorithm, by simple modifications to the MWU method and the quasi-uniform sampling technique.
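
The geometric data structures are the hard part of these results; the weight-doubling (MWU) skeleton itself is simple. The sketch below is a generic, explicitly-represented version of that skeleton under my own assumptions (a guessed cover size k and a greedily built weighted candidate cover), not the near-linear-time algorithms of the paper.

    def mwu_cover(points, sets, k, max_rounds=1000):
        """Try to cover `points` with ~k of the given frozensets via weight doubling."""
        weight = {p: 1.0 for p in points}
        for _ in range(max_rounds):
            # Build a candidate cover of k sets, greedily by total point weight.
            remaining, candidate = set(points), []
            for _ in range(k):
                best = max(sets, key=lambda s: sum(weight[p] for p in s & remaining))
                candidate.append(best)
                remaining -= best
            if not remaining:
                return candidate
            # MWU step: double the weight of every point the candidate missed,
            # so later rounds are pushed to cover them.
            for p in remaining:
                weight[p] *= 2.0
        return None  # no cover found; in practice one would retry with a larger k

    # Example: cover {1..6} with 2 sets.
    sets = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({4, 5, 6}), frozenset({2, 6})]
    print(mwu_cover(list(range(1, 7)), sets, k=2))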
