
Rounding Semidefinite Programming Hierarchies via Global Correlation

Posted by: Prasad Raghavendra
Publication date: 2011
Research field: Informatics Engineering
Language: English





We show a new way to round vector solutions of semidefinite programming (SDP) hierarchies into integral solutions, based on a connection between these hierarchies and the spectrum of the input graph. We demonstrate the utility of our method by providing a new SDP-hierarchy based algorithm for constraint satisfaction problems with 2-variable constraints (2-CSPs). More concretely, we show for every 2-CSP instance $I$ a rounding algorithm for $r$ rounds of the Lasserre SDP hierarchy for $I$ that obtains an integral solution that is at most $\epsilon$ worse than the relaxation's value (normalized to lie in $[0,1]$), as long as $r > k \cdot \mathrm{rank}_{\geq \theta}(I)/\mathrm{poly}(\epsilon)$, where $k$ is the alphabet size of $I$, $\theta = \mathrm{poly}(\epsilon/k)$, and $\mathrm{rank}_{\geq \theta}(I)$ denotes the number of eigenvalues larger than $\theta$ in the normalized adjacency matrix of the constraint graph of $I$. In the case that $I$ is a unique games instance, the threshold $\theta$ is only a polynomial in $\epsilon$, and is independent of the alphabet size. Also in this case, we can give a non-trivial bound on the number of rounds for every instance. In particular our result yields an SDP-hierarchy based algorithm that matches the performance of the recent subexponential algorithm of Arora, Barak and Steurer (FOCS 2010) in the worst case, but runs faster on a natural family of instances, thus further restricting the set of possible hard instances for Khot's Unique Games Conjecture. Our algorithm actually requires less than the $n^{O(r)}$ constraints specified by the $r$-th level of the Lasserre hierarchy, and in some cases $r$ rounds of our program can be evaluated in time $2^{O(r)} \mathrm{poly}(n)$.
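To make the spectral quantity concrete, here is a minimal Python sketch (not the paper's rounding algorithm) that computes the threshold rank $\mathrm{rank}_{\geq \theta}(I)$ of a constraint graph's normalized adjacency matrix and the resulting round bound; the example graph, the exact polynomial used for $\theta = \mathrm{poly}(\epsilon/k)$, and the $\mathrm{poly}(\epsilon)$ factor are illustrative assumptions.

```python
# Sketch: threshold rank of the normalized adjacency matrix of a constraint graph.
import numpy as np
import networkx as nx

def threshold_rank(G, theta):
    """Count eigenvalues of D^{-1/2} A D^{-1/2} that are >= theta."""
    A = nx.to_numpy_array(G)
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    A_norm = d_inv_sqrt @ A @ d_inv_sqrt      # eigenvalues lie in [-1, 1]
    eigvals = np.linalg.eigvalsh(A_norm)
    return int(np.sum(eigvals >= theta))

# Hypothetical instance: a random 4-regular constraint graph, alphabet size k = 3, eps = 0.1.
G = nx.random_regular_graph(4, 200, seed=0)
k, eps = 3, 0.1
theta = (eps / k) ** 2                         # placeholder for poly(eps/k)
rounds_needed = k * threshold_rank(G, theta) / eps ** 2   # placeholder for k * rank / poly(eps)
print(threshold_rank(G, theta), rounds_needed)
```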




Read also

Maximum A posteriori Probability (MAP) inference in graphical models amounts to solving a graph-structured combinatorial optimization problem. Popular inference algorithms such as belief propagation (BP) and generalized belief propagation (GBP) are intimately related to linear programming (LP) relaxation within the Sherali-Adams hierarchy. Despite the popularity of these algorithms, it is well understood that the Sum-of-Squares (SOS) hierarchy based on semidefinite programming (SDP) can provide superior guarantees. Unfortunately, SOS relaxations for a graph with $n$ vertices require solving an SDP with $n^{\Theta(d)}$ variables where $d$ is the degree in the hierarchy. In practice, for $d \ge 4$, this approach does not scale beyond a few tens of variables. In this paper, we propose binary SDP relaxations for MAP inference using the SOS hierarchy with two innovations focused on computational efficiency. Firstly, in analogy to BP and its variants, we only introduce decision variables corresponding to contiguous regions in the graphical model. Secondly, we solve the resulting SDP using a non-convex Burer-Monteiro style method, and develop a sequential rounding procedure. We demonstrate that the resulting algorithm can solve problems with tens of thousands of variables within minutes, and outperforms BP and GBP on practical problems such as image denoising and Ising spin glasses. Finally, for specific graph types, we establish a sufficient condition for the tightness of the proposed partial SOS relaxation.
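As a rough illustration of the Burer-Monteiro idea on a toy Ising MAP objective (this is not the paper's partial-SOS construction or its sequential rounding: it uses plain projected gradient ascent on a rank-$r$ factorization and simple random-hyperplane rounding, and the coupling matrix is synthetic):

```python
# Sketch: Burer-Monteiro style low-rank SDP for maximizing x^T J x over x in {-1,+1}^n.
import numpy as np

def burer_monteiro_map(J, r=8, steps=500, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    V = rng.normal(size=(n, r))
    V /= np.linalg.norm(V, axis=1, keepdims=True)          # unit rows: X = V V^T is feasible
    for _ in range(steps):
        V += lr * (2 * J @ V)                               # gradient of <J, V V^T>
        V /= np.linalg.norm(V, axis=1, keepdims=True)       # project back to the sphere
    g = rng.normal(size=r)                                   # random hyperplane rounding
    x = np.sign(V @ g)
    return x, x @ J @ x

# Hypothetical spin-glass couplings on 30 variables.
rng = np.random.default_rng(1)
J = rng.normal(size=(30, 30)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
x, val = burer_monteiro_map(J)
print(val)
```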
Neural networks (NNs) have been extremely successful across many tasks in machine learning. Quantization of NN weights has become an important topic due to its impact on their energy efficiency, inference time and deployment on hardware. Although post-training quantization is well-studied, training optimal quantized NNs involves combinatorial non-convex optimization problems which appear intractable. In this work, we introduce a convex optimization strategy to train quantized NNs with polynomial activations. Our method leverages hidden convexity in two-layer neural networks from the recent literature, semidefinite lifting, and Grothendieck's identity. Surprisingly, we show that certain quantized NN problems can be solved to global optimality in polynomial-time in all relevant parameters via semidefinite relaxations. We present numerical examples to illustrate the effectiveness of our method.
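For reference, the Grothendieck identity mentioned above relates the inner product of unit vectors $u, v$ to the correlation of their signs under a random Gaussian projection:

$$\mathop{\mathbb{E}}_{g \sim \mathcal{N}(0, I)}\big[\operatorname{sign}(\langle g, u \rangle)\,\operatorname{sign}(\langle g, v \rangle)\big] \;=\; \frac{2}{\pi}\,\arcsin\big(\langle u, v \rangle\big), \qquad \|u\| = \|v\| = 1.$$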
Under the Strong Exponential Time Hypothesis, an integer linear program with $n$ Boolean-valued variables and $m$ equations cannot be solved in $c^n$ time for any constant $c < 2$. If the domain of the variables is relaxed to $[0,1]$, the associated linear program can of course be solved in polynomial time. In this work, we give a natural algorithmic bridging between these extremes of $0$-$1$ and linear programming. Specifically, for any subset (finite union of intervals) $E \subset [0,1]$ containing $\{0,1\}$, we give a random-walk based algorithm with runtime $O_E((2-\mathrm{measure}(E))^n \, \mathrm{poly}(n,m))$ that finds a solution in $E^n$ to any $n$-variable linear program with $m$ constraints that is feasible over $\{0,1\}^n$. Note that as $E$ expands from $\{0,1\}$ to $[0,1]$, the runtime improves smoothly from $2^n$ to polynomial. Taking $E = [0,1/k) \cup (1-1/k,1]$ in our result yields as a corollary a randomized $(2-2/k)^{n}\,\mathrm{poly}(n)$ time algorithm for $k$-SAT. While our approach has some high level resemblance to Schöning's beautiful algorithm, our general algorithm is based on a more sophisticated random walk that incorporates several new ingredients, such as a multiplicative potential to measure progress, a judicious choice of starting distribution, and a time varying distribution for the evolution of the random walk that is itself computed via an LP at each step (a solution to which is guaranteed based on the minimax theorem). Plugging the LP algorithm into our earlier polymorphic framework yields fast exponential algorithms for any CSP (like $k$-SAT, $1$-in-$3$-SAT, NAE $k$-SAT) that admit so-called 'threshold partial polymorphisms'.
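For comparison only, here is a hedged sketch of Schöning's classic random walk for $k$-SAT, the baseline referenced above; the paper's algorithm replaces the uniform restart distribution and uniform flips with an LP-guided, time-varying walk, which is not reproduced here. The clause encoding and instance are illustrative.

```python
# Sketch: Schoening-style random walk for k-SAT (DIMACS-like clause tuples, 1-indexed variables).
import random

def schoening(clauses, n, tries=10000, seed=0):
    rng = random.Random(seed)
    for _ in range(tries):
        assign = [rng.random() < 0.5 for _ in range(n + 1)]   # index 0 unused
        for _ in range(3 * n):                                 # walk of length 3n per restart
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not unsat:
                return assign
            lit = rng.choice(rng.choice(unsat))                # flip a variable from a violated clause
            assign[abs(lit)] = not assign[abs(lit)]
    return None

# Tiny example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(schoening([(1, 2), (-1, 3), (-2, -3)], n=3))
```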
Troy Lee, Rajat Mittal (2008)
The tendency of semidefinite programs to compose perfectly under product has been exploited many times in complexity theory: for example, by Lovasz to determine the Shannon capacity of the pentagon; to show a direct sum theorem for non-deterministic communication complexity and direct product theorems for discrepancy; and in interactive proof systems to show parallel repetition theorems for restricted classes of games. Despite all these examples of product theorems--some going back nearly thirty years--it was only recently that Mittal and Szegedy began to develop a general theory to explain when and why semidefinite programs behave perfectly under product. This theory captured many examples in the literature, but there were also some notable exceptions which it could not explain--namely, an early parallel repetition result of Feige and Lovasz, and a direct product theorem for the discrepancy method of communication complexity by Lee, Shraibman, and Spalek. We extend the theory of Mittal and Szegedy to explain these cases as well. Indeed, to the best of our knowledge, our theory captures all examples of semidefinite product theorems in the literature.
We propose a new hierarchy of semidefinite programming relaxations for inference problems. As test cases, we consider the problem of community detection in block models. The vertices are partitioned into $k$ communities, and a graph is sampled conditional on a prescribed number of inter- and intra-community edges. The problem of detection, where we are to decide with high probability whether a graph was drawn from this model or the uniform distribution on regular graphs, is conjectured to undergo a computational phase transition at a point called the Kesten-Stigum (KS) threshold. In this work, we consider two models of random graphs, namely the well-studied (irregular) stochastic block model and a distribution over random regular graphs we call the Degree Regular Block Model. For both these models, we show that sufficiently high constant levels of our hierarchy can perform detection arbitrarily close to the KS threshold and that our algorithm is robust to up to a linear number of adversarial edge perturbations. Furthermore, in the case of the Degree Regular Block Model (DRBM), we show that below the Kesten-Stigum threshold no constant level can do so. In the case of the (irregular) Stochastic Block Model, it is known that efficient algorithms exist all the way down to this threshold, although none are robust to a linear number of adversarial perturbations of the graph when the average degree is small. More importantly, there is little complexity-theoretic evidence that detection is hard below the threshold. In the DRBM with more than two groups, it has not to our knowledge been proven that any algorithm succeeds down to the KS threshold, let alone that one can do so robustly, and there is a similar dearth of evidence for hardness below this point.
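For orientation only (this formula is an editorial assumption, not taken from the abstract): in the two-community sparse stochastic block model with within- and between-community average degrees $a$ and $b$, the Kesten-Stigum threshold is commonly written as

$$(a - b)^2 \;=\; 2\,(a + b),$$

with efficient detection believed, and in the two-community case known, to be possible whenever $(a-b)^2 > 2(a+b)$.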