
Loopy annealing belief propagation for vertex cover and matching: convergence, LP relaxation, correctness and Bethe approximation

Published by: Marc Lelarge
Publication date: 2014
Research field: Informatics Engineering
Language: English
Author: Marc Lelarge





For the minimum cardinality vertex cover and maximum cardinality matching problems, the max-product form of belief propagation (BP) is known to perform poorly on general graphs. In this paper, we present an iterative loopy annealing BP (LABP) algorithm which is shown to converge and to solve a linear programming relaxation of the vertex cover or matching problem on general graphs. LABP finds (asymptotically) a minimum half-integral vertex cover (hence provides a 2-approximation) and a maximum fractional matching on any graph. We also show that LABP finds (asymptotically) a minimum-size vertex cover for any bipartite graph and, as a consequence, computes the matching number of the graph. Our proof relies on some subtle monotonicity arguments for the local iteration. We also show that the Bethe free entropy is concave and that LABP maximizes it. Using loop calculus, we also give an exact (though intractable for general graphs) expression of the partition function for matching in terms of the LABP messages, which can be used to improve mean-field approximations.
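As a rough illustration of the kind of message passing involved, the sketch below runs the standard sum-product recursion for the matching (monomer-dimer) model with edge activity z, namely m[i->j] = 1 / (1 + z * sum over k in N(i)\{j} of m[k->i]), and reads off approximate edge marginals. This is only a hedged sketch of the generic recursion under that assumption; the paper's LABP additionally anneals the activity z, and its exact update and normalization may differ. Function and parameter names are illustrative.

```python
from collections import defaultdict

def bp_matching(edges, z=1.0, iters=200, tol=1e-9):
    """Sum-product BP for the matching (monomer-dimer) model with edge
    activity z.  Messages live on directed edges; the recursion
        m[i->j] = 1 / (1 + z * sum_{k in N(i)\\{j}} m[k->i])
    is the standard one for this model.  The paper's LABP adds an
    annealing schedule on z (z growing), which is not reproduced here."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # initialize all directed messages to 1
    m = {(u, v): 1.0 for u in adj for v in adj[u]}
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            s = sum(m[(k, i)] for k in adj[i] if k != j)
            new[(i, j)] = 1.0 / (1.0 + z * s)
        delta = max(abs(new[e] - m[e]) for e in m)
        m = new
        if delta < tol:
            break
    # approximate edge marginals: estimated P[(u, v) is in the matching]
    marg = {(u, v): (z * m[(u, v)] * m[(v, u)])
            / (1.0 + z * m[(u, v)] * m[(v, u)])
            for (u, v) in edges}
    return m, marg

if __name__ == "__main__":
    # small example: a path on 4 vertices; large z favors larger matchings
    edges = [(0, 1), (1, 2), (2, 3)]
    msgs, marginals = bp_matching(edges, z=5.0)
    print(marginals)
```

On trees this recursion is exact; on a single edge it gives the true marginal z/(1+z), which is a quick sanity check of the update.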


Read also

A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say, $k$ machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model, whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal protocol. While many fundamental graph problems admit efficient solutions in this model, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a $\mathrm{polylog}(n)$ approximation requires essentially sending the entire input graph from each machine. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size $\widetilde{O}(n)$ that yield an $\widetilde{O}(1)$-approximate solution. This results in an $\widetilde{O}(1)$-approximation simultaneous protocol for these problems with $\widetilde{O}(nk)$ total communication when the input is randomly partitioned across $k$ machines. We further prove the optimality of our results. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication.
Soheil Behnezhad (2021)
We present a near-tight analysis of the average query complexity -- à la Nguyen and Onak [FOCS08] -- of the randomized greedy maximal matching algorithm, improving over the bound of Yoshida, Yamamoto and Ito [STOC09]. For any $n$-vertex graph of average degree $\bar{d}$, this leads to the following sublinear-time algorithms for estimating the size of maximum matching and minimum vertex cover, all of which are provably time-optimal up to logarithmic factors (the algorithm itself is sketched after this list):
- A multiplicative $(2+\epsilon)$-approximation in $\widetilde{O}(n/\epsilon^2)$ time using adjacency list queries. This (nearly) matches an $\Omega(n)$ time lower bound for any multiplicative approximation and is, notably, the first $O(1)$-approximation that runs in $o(n^{1.5})$ time.
- A $(2, \epsilon n)$-approximation in $\widetilde{O}((\bar{d} + 1)/\epsilon^2)$ time using adjacency list queries. This (nearly) matches an $\Omega(\bar{d}+1)$ lower bound of Parnas and Ron [TCS07] which holds for any $(O(1), \epsilon n)$-approximation, and improves over the bounds of [Yoshida et al. STOC09; Onak et al. SODA12] and [Kapralov et al. SODA20]: the former two take at least quadratic time in the degree, which can be as large as $\Omega(n^2)$, and the latter obtains a much larger approximation.
- A $(2, \epsilon n)$-approximation in $\widetilde{O}(n/\epsilon^3)$ time using adjacency matrix queries. This (nearly) matches an $\Omega(n)$ time lower bound in this model and improves over the $\widetilde{O}(n\sqrt{n})$-time $(2, \epsilon n)$-approximate algorithm of [Chen, Kannan, and Khanna ICALP20]. It also turns out that any non-trivial multiplicative approximation in the adjacency matrix model requires $\Omega(n^2)$ time, so the additive $\epsilon n$ error is necessary too.
As immediate corollaries, we get improved sublinear-time estimators for (variants of) TSP and an improved AMPC algorithm for maximal matching.
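For reference, the randomized greedy maximal matching algorithm analyzed above is simple to state: scan the edges in a uniformly random order and add an edge whenever both endpoints are still unmatched. The sketch below runs it in full on an explicit edge list, whereas the sublinear-time estimators above only simulate local queries to it; function and parameter names are illustrative.

```python
import random

def randomized_greedy_maximal_matching(n, edges, seed=None):
    """Process the edges in a uniformly random order; add an edge to the
    matching iff both of its endpoints are currently unmatched.  The result
    is always a maximal matching (no edge can be added), hence at least half
    the size of a maximum matching."""
    rng = random.Random(seed)
    order = edges[:]
    rng.shuffle(order)
    matched = [False] * n
    matching = []
    for u, v in order:
        if not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
            matching.append((u, v))
    return matching

if __name__ == "__main__":
    # triangle plus a pendant vertex
    print(randomized_greedy_maximal_matching(4, [(0, 1), (1, 2), (0, 2), (2, 3)], seed=0))
```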
We introduce and study two natural generalizations of the Connected Vertex Cover (VC) problem: the $p$-Edge-Connected and $p$-Vertex-Connected VC problems (where $p \geq 2$ is a fixed integer). Like Connected VC, both new VC problems are FPT, but do not admit a polynomial kernel unless $\mathrm{NP} \subseteq \mathrm{coNP/poly}$, which is highly unlikely. We prove, however, that both problems admit time-efficient polynomial-sized approximate kernelization schemes. We obtain an $O(2^{O(pk)}n^{O(1)})$-time algorithm for $p$-Edge-Connected VC and an $O(2^{O(k^2)}n^{O(1)})$-time algorithm for $p$-Vertex-Connected VC. Finally, we describe a $2(p+1)$-approximation algorithm for $p$-Edge-Connected VC. The proofs for the new VC problems require more sophisticated arguments than for Connected VC. In particular, for the approximation algorithm we use Gomory-Hu trees, and for the approximate kernels a result on small-size spanning $p$-vertex/edge-connected subgraphs of a $p$-vertex/edge-connected graph obtained independently by Nishizeki and Poljak (1994) and Nagamochi and Ibaraki (1992).
Threshold graphs are a class of graphs that have many equivalent definitions and have applications in integer programming and set packing problems. A graph is said to have a threshold cover of size $k$ if its edges can be covered using $k$ threshold graphs. Chvatal and Hammer, in 1977, defined the threshold dimension $\mathrm{th}(G)$ of a graph $G$ to be the least integer $k$ such that $G$ has a threshold cover of size $k$ and observed that $\mathrm{th}(G) \geq \chi(G^*)$, where $G^*$ is a suitably constructed auxiliary graph. Raschle and Simon [Proceedings of the Twenty-seventh Annual ACM Symposium on Theory of Computing, STOC '95, pages 650--661, 1995] proved that $\mathrm{th}(G) = \chi(G^*)$ whenever $G^*$ is bipartite. We show how the lexicographic method of Hell and Huang can be used to obtain a completely new and, we believe, simpler proof of this result. For the case when $G$ is a split graph, our method yields a proof that is much shorter than the ones known in the literature.
Gaussian belief propagation (GaBP) is an iterative message-passing algorithm for inference in Gaussian graphical models. It is known that when GaBP converges, it converges to the correct MAP estimate of the Gaussian random vector, and simple sufficient conditions for its convergence have been established. In this paper we develop a double-loop algorithm for forcing convergence of GaBP. Our method computes the correct MAP estimate even in cases where standard GaBP would not have converged. We further extend this construction to compute least-squares solutions of over-constrained linear systems. We believe that our construction has numerous applications, since the GaBP algorithm is linked to the solution of linear systems of equations, which is a fundamental problem in computer science and engineering. As a case study, we discuss the linear detection problem. We show that using our new construction, we are able to force convergence of Montanari's linear detection algorithm in cases where it would originally fail. As a consequence, we are able to significantly increase the number of users that can transmit concurrently.
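For context, the sketch below shows plain single-loop GaBP in information form (precision matrix J, potential vector h), which is the baseline recursion the abstract above refers to; the paper's double-loop construction that forces convergence is not reproduced here. Function names and the convergence tolerance are illustrative.

```python
import numpy as np

def gabp(J, h, iters=100, tol=1e-9):
    """Standard single-loop Gaussian belief propagation in information form.
    Each directed edge (i, j) carries a precision correction dJ[i, j] and a
    potential correction dh[i, j].  When the recursion converges, the
    recovered means equal the exact solution of J x = h."""
    n = len(h)
    dJ = np.zeros((n, n))
    dh = np.zeros((n, n))
    neighbors = [[j for j in range(n) if j != i and J[i, j] != 0] for i in range(n)]
    for _ in range(iters):
        new_dJ, new_dh = np.zeros((n, n)), np.zeros((n, n))
        for i in range(n):
            for j in neighbors[i]:
                # cavity quantities at i, excluding the message coming from j
                J_cav = J[i, i] + sum(dJ[k, i] for k in neighbors[i] if k != j)
                h_cav = h[i] + sum(dh[k, i] for k in neighbors[i] if k != j)
                new_dJ[i, j] = -J[i, j] ** 2 / J_cav
                new_dh[i, j] = -J[i, j] * h_cav / J_cav
        done = (np.max(np.abs(new_dJ - dJ)) < tol and np.max(np.abs(new_dh - dh)) < tol)
        dJ, dh = new_dJ, new_dh
        if done:
            break
    # marginal precisions are approximate on loopy graphs; the means are
    # exact whenever GaBP converges
    prec = np.array([J[i, i] + sum(dJ[k, i] for k in neighbors[i]) for i in range(n)])
    mean = np.array([(h[i] + sum(dh[k, i] for k in neighbors[i])) / prec[i] for i in range(n)])
    return mean, prec

if __name__ == "__main__":
    J = np.array([[2.0, 0.5, 0.0], [0.5, 2.0, 0.5], [0.0, 0.5, 2.0]])
    h = np.array([1.0, 0.0, 1.0])
    print(gabp(J, h)[0], np.linalg.solve(J, h))  # compare to the exact solution
```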