A popular numerical approach to computing sum-of-squares (SOS) decompositions of polynomials is to transform the problem into a semidefinite programming (SDP) problem and then solve it with an SDP solver. In this paper, we focus on reducing the size of the inputs to SDP solvers, so as to improve the efficiency and reliability of such SDP-based methods. Two types of polynomials, convex cover polynomials and split polynomials, are defined. A convex cover polynomial or a split polynomial can be decomposed into several smaller sub-polynomials such that the original polynomial is SOS if and only if all the sub-polynomials are SOS; thus the original SOS problem decomposes equivalently into smaller sub-problems. We prove that convex cover polynomials are split polynomials, and sparse polynomials in many variables are quite likely to be split polynomials, which can be detected efficiently in practice. We also give some necessary conditions for a polynomial to be SOS, which help to quickly refute polynomials that admit no SOS representation, so that SDP solvers need not be called in such cases. These results lead to a new SDP-based method for computing SOS decompositions that improves on existing methods of this kind by passing smaller inputs to SDP solvers in some cases. Experiments show that the number of monomials produced by our program is often smaller than that produced by other SDP-based software, especially for polynomials with many variables and of high degree. Numerical results on various tests are reported to show the performance of our program.
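As a concrete illustration of the SDP transformation these methods share, the following minimal sketch (assuming the Python library cvxpy; the example polynomial and basis are ours, not from the paper) decides whether a univariate polynomial is SOS by searching for a positive semidefinite Gram matrix $Q$ with $p(x) = v(x)^T Q\, v(x)$:

    import cvxpy as cp

    # Decide whether p(x) = x^4 + 4x^3 + 6x^2 + 4x + 1 is SOS by searching
    # for a PSD Gram matrix Q with p(x) = v(x)^T Q v(x), v(x) = [1, x, x^2].
    Q = cp.Variable((3, 3), PSD=True)
    constraints = [
        Q[0, 0] == 1,                # match coefficient of x^0
        2 * Q[0, 1] == 4,            # x^1
        2 * Q[0, 2] + Q[1, 1] == 6,  # x^2
        2 * Q[1, 2] == 4,            # x^3
        Q[2, 2] == 1,                # x^4
    ]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print(prob.status)  # feasible ("optimal") here, since p = (x^2 + 2x + 1)^2

The decompositions above target exactly this bottleneck: the size of $Q$ grows combinatorially with the number of variables and the degree, and splitting the input polynomial replaces one large Gram matrix by several smaller ones.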
The max-cut problem is a classical graph-theory problem which is NP-complete. The best known polynomial-time approximation algorithm relies on \emph{semidefinite programming} (SDP). We study the conditions under which graphs of certain classes have rank~1 solutions to the max-cut SDP, and we apply these findings to examine how solutions to the max-cut SDP behave under simple combinatorial constructions. Our results determine when the solutions to the max-cut SDP for cycle graphs are rank~1. We find the solutions to the max-cut SDP of the vertex~sum of two graphs, and we then characterize the SDP solutions upon joining two triangle graphs by an edge~sum.
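To make the rank question concrete, here is a minimal sketch (assuming numpy and cvxpy; the choice of $C_5$ and the rank tolerance are ours) that solves the max-cut SDP for a 5-cycle and inspects the rank of the optimal matrix:

    import numpy as np
    import cvxpy as cp

    n = 5                                      # the cycle graph C_5
    edges = [(i, (i + 1) % n) for i in range(n)]
    X = cp.Variable((n, n), PSD=True)          # X[i, j] relaxes x_i * x_j, x in {-1, +1}
    cut = sum((1 - X[i, j]) / 2 for i, j in edges)
    prob = cp.Problem(cp.Maximize(cut), [cp.diag(X) == 1])
    prob.solve()
    rank = int(np.sum(np.linalg.eigvalsh(X.value) > 1e-6))
    print(prob.value, rank)  # a rank-1 optimum would correspond to an exact cut;
                             # for an odd cycle the relaxation value exceeds the
                             # true max cut, so a rank > 1 optimum is expected here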
We consider the global minimization of a polynomial on a compact set $B$. We show that each step of the Moment-SOS hierarchy has a nice and simple interpretation that complements the usual one: namely, it computes the coefficients of a polynomial in an orthonormal basis of $L^2(B,\mu)$, where $\mu$ is an arbitrary reference measure whose support is exactly $B$. The resulting polynomial is a certain density (with respect to $\mu$) of some signed measure on $B$. When some relaxation is exact (which generically takes place), the coefficients of the optimal polynomial density are the values of the orthonormal polynomials at the global minimizer, and the optimal (signed) density is simply related to the Christoffel-Darboux (CD) kernel and the Christoffel function associated with $\mu$. In contrast to the hierarchy of upper bounds, which computes positive densities, the global optimum can be achieved exactly as integration against a polynomial (signed) density, because the CD kernel is a reproducing kernel and so can mimic a Dirac measure (as far as finitely many moments are concerned).
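For the reader's convenience, the objects mentioned above are (in standard notation, not taken verbatim from the paper) the kernel and function

\[
K_d(x,y) \;=\; \sum_{k} P_k(x)\,P_k(y), \qquad \Lambda_d(x) \;=\; \frac{1}{K_d(x,x)},
\]

where $(P_k)$ runs over an orthonormal basis of the polynomials of degree at most $d$ in $L^2(B,\mu)$. The reproducing property

\[
\int_B K_d(x,y)\,q(y)\,d\mu(y) \;=\; q(x) \qquad \text{for all } q \text{ with } \deg q \le d
\]

is the precise sense in which $y \mapsto K_d(x^*,y)$ mimics the Dirac measure at the global minimizer $x^*$, as far as finitely many moments are concerned.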
In a series of four papers we prove the following relaxation of the Loebl-Komlos-Sos conjecture: for every $\alpha>0$ there exists a number $k_0$ such that for every $k>k_0$, every $n$-vertex graph $G$ with at least $(\frac12+\alpha)n$ vertices of degree at least $(1+\alpha)k$ contains each tree $T$ of order $k$ as a subgraph. The method of proof follows a strategy similar to approaches that employ the Szemeredi regularity lemma: we decompose the graph $G$, find a suitable combinatorial structure inside the decomposition, and then embed the tree $T$ into $G$ using this structure. Since for sparse graphs $G$ the decomposition given by the regularity lemma is not helpful, we use a more general decomposition technique. We show that each graph can be decomposed into vertices of huge degree, regular pairs (in the sense of the regularity lemma), and two other objects, each exhibiting certain expansion properties. In this paper, we introduce this novel decomposition technique. In the three follow-up papers, we find a suitable combinatorial structure inside the decomposition, which we then use to embed the tree.
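For comparison, the exact conjecture being relaxed states (as it is usually formulated in the literature; we restate it here, not from this paper) that

\[
\bigl|\{\, v \in V(G) : \deg(v) \ge k \,\}\bigr| \;\ge\; \tfrac{n}{2}
\quad\Longrightarrow\quad
T \subseteq G \ \text{ for every tree } T \text{ with } k \text{ edges},
\]

so the theorem above assumes slightly more: a $(\frac12+\alpha)$-fraction of the vertices, each of degree at least $(1+\alpha)k$.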
We show that the {\it semidefinite programming} (SDP) feasibility problem is equivalent to solving a {\it convex hull relaxation} (CHR) for a finite system of quadratic equations. On the one hand, this offers a simple description of SDP. On the other hand, this equivalence makes it possible to describe a version of the {\it Triangle Algorithm} for SDP feasibility based on solving the CHR. Specifically, the Triangle Algorithm either computes an approximation to the least-norm feasible solution of the SDP or, using its {\it distance duality}, provides a separation when no solution within a prescribed norm exists. The worst-case complexity of each iteration is that of computing the largest eigenvalue of a symmetric matrix arising in that iteration. Alternative complexity bounds on the total number of iterations can be derived. The Triangle Algorithm thus provides an alternative to the existing interior-point algorithms for SDP feasibility and SDP optimization. In particular, based on preliminary computational results, we can efficiently solve the SDP relaxation of {\it binary quadratic} feasibility via the Triangle Algorithm; this finds application in solving the SDP relaxation of MAX-CUT. We also show that, in the case of testing the feasibility of a system of convex quadratic inequalities, the problem is reducible to a corresponding CHR, where the worst-case complexity of each iteration via the Triangle Algorithm is that of solving a {\it trust region subproblem}. Building on these results, we discuss potential extensions of the CHR and the Triangle Algorithm to solving general systems of polynomial equations.
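For intuition, the following minimal sketch (assuming numpy; the toy point set is ours) runs the Triangle Algorithm on an explicit finite point set. In the SDP setting the point set is implicit, and the pivot search below becomes the largest-eigenvalue computation mentioned above:

    import numpy as np

    def triangle_algorithm(points, p, eps=1e-6, max_iter=10000):
        # Test whether p lies in conv(rows of points). Returns ("in", q) with
        # ||q - p|| <= eps, or ("out", q) where q is a witness certifying, by
        # distance duality, that p is outside: ||q - v|| < ||p - v|| for all v.
        q = points[0].copy()                   # current iterate inside the hull
        for _ in range(max_iter):
            if np.linalg.norm(q - p) <= eps:
                return "in", q
            # Pivot search: some v with ||q - v|| >= ||p - v||.  Here a linear
            # scan; for SDP feasibility this step is an eigenvalue computation.
            gaps = (np.linalg.norm(points - q, axis=1)
                    - np.linalg.norm(points - p, axis=1))
            j = int(np.argmax(gaps))
            if gaps[j] < 0:                    # no pivot exists: q is a witness
                return "out", q
            d = points[j] - q                  # move q to the point on the
            alpha = np.dot(p - q, d) / np.dot(d, d)   # segment [q, v] closest to p
            q = q + np.clip(alpha, 0.0, 1.0) * d
        return "maybe", q                      # iteration budget exhausted

    # Toy usage: the origin is a convex combination of the three points below.
    pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
    print(triangle_algorithm(pts, np.array([0.0, 0.0]))[0])   # expected: "in"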
Sparse optimization is a central problem in machine learning and computer vision. However, this problem is inherently NP-hard and thus difficult to solve in general. Combinatorial search methods find the global optimal solution but are confined to small-sized problems, while coordinate descent methods are efficient but often suffer from poor local minima. This paper considers a new block decomposition algorithm that combines the effectiveness of combinatorial search methods with the efficiency of coordinate descent methods. Specifically, we use a random strategy and/or a greedy strategy to select a subset of coordinates as the working set, and then perform a global combinatorial search over the working set based on the original objective function. We show that our method finds stronger stationary points than the coordinate-wise optimization method of Amir Beck et al. In addition, we establish the convergence rate of our algorithm. Our experiments on sparse-regularized and sparsity-constrained least-squares optimization problems demonstrate that our method achieves state-of-the-art accuracy; for example, it generally outperforms the well-known greedy pursuit method.
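A minimal sketch of the random working-set variant, on the sparsity-constrained problem $\min \|Ax-b\|^2$ subject to $\|x\|_0 \le s$ (the function name, working-set size, and iteration count are our illustrative choices, not the paper's):

    import itertools
    import numpy as np

    def block_l0_least_squares(A, b, s, block=4, iters=100, seed=0):
        # Each iteration frees a random working set W, then performs an
        # exhaustive combinatorial search over supports inside W against the
        # original objective, keeping the coordinates outside W fixed.
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        x = np.zeros(n)
        for _ in range(iters):
            W = rng.choice(n, size=block, replace=False)  # random working set
            x[W] = 0.0                                    # free working coords
            fixed = np.flatnonzero(x)                     # support outside W
            r = b - A[:, fixed] @ x[fixed]                # residual of fixed part
            budget = min(s - len(fixed), block)           # sparsity budget left for W
            best_val, best_T, best_c = float(r @ r), (), np.zeros(0)
            for k in range(1, budget + 1):
                for T in itertools.combinations(W, k):    # global search over W
                    cols = A[:, list(T)]
                    c, *_ = np.linalg.lstsq(cols, r, rcond=None)
                    val = float(np.sum((r - cols @ c) ** 2))
                    if val < best_val:
                        best_val, best_T, best_c = val, T, c
            x[list(best_T)] = best_c
        return x

    # Toy usage (illustrative): recover a 3-sparse signal from exact measurements.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((30, 12))
    x_true = np.zeros(12); x_true[[2, 5, 9]] = [1.0, -2.0, 0.5]
    x_hat = block_l0_least_squares(A, A @ x_true, s=3)
    print(np.flatnonzero(x_hat))  # ideally [2, 5, 9]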