
LP-based Tractable Subcones of the Semidefinite Plus Nonnegative Cone

Added by Akiko Yoshise
Publication date: 2016
Language: English





In a previous paper, the authors devised certain subcones of the semidefinite plus nonnegative cone and showed that membership in those subcones can be detected by solving linear optimization problems (LPs) with $O(n)$ variables and $O(n^2)$ constraints. They also devised LP-based algorithms for testing copositivity using the subcones. In this paper, they investigate the properties of the subcones in more detail and explore larger subcones of the positive semidefinite plus nonnegative cone whose membership can likewise be detected by solving LPs. They introduce a {\em semidefinite basis (SD basis)}, a basis of the space of $n \times n$ symmetric matrices consisting of $n(n+1)/2$ symmetric positive semidefinite matrices. Using the SD basis, they devise two new subcones for which membership detection can be done by solving LPs with $O(n^2)$ variables and $O(n^2)$ constraints. The new subcones are larger than the ones in the previous paper and inherit their nice properties. The authors also examine the efficiency of those subcones in numerical experiments. The results show that the subcones are promising for testing copositivity as a useful application.
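As an illustration, one standard SD-basis construction (in the spirit of Tanaka and Yoshise; the paper's exact definition may differ) uses the rank-one PSD matrices $(e_i + e_j)(e_i + e_j)^T$ for $1 \le i \le j \le n$. The sketch below is my own reconstruction, not code from the paper; it verifies that these $n(n+1)/2$ matrices are PSD and span the symmetric matrices.

```python
import numpy as np
from itertools import combinations_with_replacement

def sd_basis(n):
    """One candidate SD basis of the n x n symmetric matrices:
    the rank-one PSD matrices (e_i + e_j)(e_i + e_j)^T for i <= j.
    (Assumed construction; the paper's definition may differ.)"""
    I = np.eye(n)
    return [np.outer(I[i] + I[j], I[i] + I[j])
            for i, j in combinations_with_replacement(range(n), 2)]

n = 4
basis = sd_basis(n)
assert len(basis) == n * (n + 1) // 2
# Every element is positive semidefinite ...
assert all(np.linalg.eigvalsh(B).min() >= -1e-12 for B in basis)
# ... and together they span S^n: the vectorized matrices have full rank.
V = np.stack([B.flatten() for B in basis])
assert np.linalg.matrix_rank(V) == n * (n + 1) // 2
print("SD basis verified for n =", n)
```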



Related Research

We study the problem of maximizing the geometric mean of $d$ low-degree non-negative forms on the real or complex sphere in $n$ variables. We show that this highly non-convex problem is NP-hard even when the forms are quadratic and is equivalent to optimizing a homogeneous polynomial of degree $O(d)$ on the sphere. The standard Sum-of-Squares based convex relaxation for this polynomial optimization problem requires solving a semidefinite program (SDP) of size $n^{O(d)}$, with multiplicative approximation guarantees of $\Omega(\frac{1}{n})$. We exploit the compact representation of this polynomial to introduce an SDP relaxation of size polynomial in $n$ and $d$, and prove that it achieves a constant factor multiplicative approximation when maximizing the geometric mean of non-negative quadratic forms. We also show that this analysis is asymptotically tight, with a sequence of instances where the gap between the relaxation and true optimum approaches this constant factor as $d \rightarrow \infty$. Next we propose a series of intermediate relaxations of increasing complexity that interpolate to the full Sum-of-Squares relaxation, as well as a rounding algorithm that finds an approximate solution from the solution of any intermediate relaxation. Finally we show that this approach can be generalized for relaxations of products of non-negative forms of any degree.
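For the quadratic case, the basic relaxation is easy to state: since $x^T A_i x = \langle A_i, xx^T \rangle$, relaxing $xx^T$ to a PSD matrix of unit trace turns the log of the geometric mean into a concave SDP objective. The CVXPY sketch below is my illustration of that basic bound, not the paper's compact relaxation.

```python
import cvxpy as cp
import numpy as np

# Random nonnegative quadratic forms A_i (PSD, so x^T A_i x >= 0 on the sphere).
rng = np.random.default_rng(0)
n, d = 5, 3
A = [(lambda M: M @ M.T)(rng.standard_normal((n, n))) for _ in range(d)]

# Relax xx^T to X >= 0 with trace(X) = 1; maximize log of the geometric mean.
X = cp.Variable((n, n), PSD=True)
obj = cp.Maximize(cp.sum([cp.log(cp.trace(Ai @ X)) for Ai in A]) / d)
prob = cp.Problem(obj, [cp.trace(X) == 1])
prob.solve()

# exp(optimal value) upper-bounds the max geometric mean over the unit sphere,
# since X = xx^T is feasible for every unit vector x.
print(np.exp(prob.value))
```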
We develop techniques to construct a series of sparse polyhedral approximations of the semidefinite cone. Motivated by the semidefinite (SD) bases proposed by Tanaka and Yoshise (2018), we propose a simple expansion of SD bases so as to preserve the sparsity of the matrices composing them. We prove that the polyhedral approximation using our expanded SD bases contains the set of all diagonally dominant matrices and is contained in the set of all scaled diagonally dominant matrices. We also prove that the set of all scaled diagonally dominant matrices can be expressed using an infinite number of expanded SD bases. We use our approximations as the initial approximation in cutting plane methods for solving a semidefinite relaxation of the maximum stable set problem. It is found that the proposed methods with expanded SD bases are significantly more efficient than methods using other existing approximations or solving semidefinite relaxation problems directly.
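Membership in the polyhedral inner approximation generated by a finite family of basis matrices reduces to LP feasibility: is $X$ a nonnegative combination of the generators? A minimal sketch, assuming the rank-one SD basis from above and SciPy's `linprog` (my illustration, not the paper's exact cones):

```python
import numpy as np
from scipy.optimize import linprog

def in_cone_of(basis, X):
    """LP feasibility test: does X = sum_k lam_k * B_k with lam >= 0?
    (Illustrative inner-approximation membership test.)"""
    A_eq = np.stack([B.flatten() for B in basis], axis=1)  # columns = vec(B_k)
    res = linprog(c=np.zeros(len(basis)), A_eq=A_eq, b_eq=X.flatten(),
                  bounds=(0, None), method="highs")
    return res.status == 0  # status 0 means a feasible point was found

n = 4
I = np.eye(n)
basis = [np.outer(I[i] + I[j], I[i] + I[j])
         for i in range(n) for j in range(i, n)]
print(in_cone_of(basis, 2 * np.eye(n)))  # True: scaled diagonal generators suffice
# False: this PSD matrix has negative off-diagonal entries, which nonnegative
# combinations of these (entrywise nonnegative) generators cannot produce.
print(in_cone_of(basis, np.eye(n) - 0.9 * np.ones((n, n)) / n))
```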
James Saunderson, 2017
If $X$ is an $n \times n$ symmetric matrix, then the directional derivative of $X \mapsto \det(X)$ in the direction $I$ is the elementary symmetric polynomial of degree $n-1$ in the eigenvalues of $X$. This is a polynomial in the entries of $X$ with the property that it is hyperbolic with respect to the direction $I$. The corresponding hyperbolicity cone is a relaxation of the positive semidefinite (PSD) cone known as the first derivative relaxation (or Renegar derivative) of the PSD cone. A spectrahedral cone is a convex cone that has a representation as the intersection of a subspace with the cone of PSD matrices in some dimension. We show that the first derivative relaxation of the PSD cone is a spectrahedral cone, and give an explicit spectrahedral description of size $\binom{n+1}{2}-1$. The construction provides a new explicit example of a hyperbolicity cone that is also a spectrahedron. This is consistent with the generalized Lax conjecture, which conjectures that every hyperbolicity cone is a spectrahedron.
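The opening identity is easy to check numerically: $\frac{d}{dt}\det(X+tI)\big|_{t=0} = e_{n-1}(\lambda_1,\dots,\lambda_n)$, the elementary symmetric polynomial of degree $n-1$ in the eigenvalues. A quick finite-difference check (my illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
X = (M + M.T) / 2  # random symmetric matrix

# Central difference for d/dt det(X + t*I) at t = 0.
h = 1e-6
deriv = (np.linalg.det(X + h * np.eye(n))
         - np.linalg.det(X - h * np.eye(n))) / (2 * h)

# Elementary symmetric polynomial e_{n-1} of the eigenvalues.
lam = np.linalg.eigvalsh(X)
e_nm1 = sum(np.prod(np.delete(lam, i)) for i in range(n))

print(deriv, e_nm1)  # agree to finite-difference accuracy
```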
We consider a new and general online resource allocation problem, where the goal is to maximize a function of a positive semidefinite (PSD) matrix with a scalar budget constraint. The problem data arrives online, and the algorithm needs to make an irrevocable decision at each step. Of particular interest are classic experiment design problems in the online setting, with the algorithm deciding whether to allocate budget to each experiment as new experiments become available sequentially. We analyze two greedy primal-dual algorithms and provide bounds on their competitive ratios. Our analysis relies on a smooth surrogate of the objective function that needs to satisfy a new diminishing returns (PSD-DR) property (that its gradient is order-reversing with respect to the PSD cone). Using the representation for monotone maps on the PSD cone given by Löwner's theorem, we obtain a convex parametrization of the family of functions satisfying PSD-DR. We then formulate a convex optimization problem to directly optimize our competitive ratio bound over this set. This design problem can be solved offline before the data start arriving. The online algorithm that uses the designed smoothing is tailored to the given cost function, and enjoys a competitive ratio at least as good as our optimized bound. We provide examples of computing the smooth surrogate for D-optimal and A-optimal experiment design, and demonstrate the performance of the custom-designed algorithm.
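For intuition, in D-optimal design the marginal gain of adding an experiment vector $v$ to an information matrix $M$ has the closed form $\log\det(M + vv^T) - \log\det M = \log(1 + v^T M^{-1} v)$, by the matrix determinant lemma. The toy threshold-greedy baseline below uses that identity; it is my illustration only, not the paper's primal-dual algorithm with optimized smoothing.

```python
import numpy as np

def greedy_d_optimal(vs, costs, budget, threshold, eps=1e-3):
    """Toy online rule: accept experiment v when its marginal log-det
    gain per unit cost exceeds a fixed threshold and the budget allows.
    (Illustrative baseline, not the paper's competitive algorithm.)"""
    n = len(vs[0])
    M = eps * np.eye(n)  # regularized information matrix
    spent, chosen = 0.0, []
    for k, (v, c) in enumerate(zip(vs, costs)):
        # Matrix determinant lemma: gain = log(1 + v^T M^{-1} v).
        gain = np.log1p(v @ np.linalg.solve(M, v))
        if gain / c >= threshold and spent + c <= budget:
            M += np.outer(v, v)
            spent += c
            chosen.append(k)
    return chosen, M

rng = np.random.default_rng(2)
vs = [rng.standard_normal(4) for _ in range(50)]
costs = [1.0] * 50
chosen, M = greedy_d_optimal(vs, costs, budget=10, threshold=0.5)
print(len(chosen), np.linalg.slogdet(M)[1])
```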
The matrix logarithm, when applied to Hermitian positive definite matrices, is concave with respect to the positive semidefinite order. This operator concavity property leads to numerous concavity and convexity results for other matrix functions, many of which are of importance in quantum information theory. In this paper we show how to approximate the matrix logarithm with functions that preserve operator concavity and can be described using the feasible regions of semidefinite optimization problems of fairly small size. Such approximations allow us to use off-the-shelf semidefinite optimization solvers for convex optimization problems involving the matrix logarithm and related functions, such as the quantum relative entropy. The basic ingredients of our approach apply, beyond the matrix logarithm, to functions that are operator concave and operator monotone. As such, we introduce strategies for constructing semidefinite approximations that we expect will be useful, more generally, for studying the approximation power of functions with small semidefinite representations.
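A key scalar identity behind such constructions is the integral representation $\log x = \int_0^1 \frac{x-1}{t(x-1)+1}\,dt$; discretizing with Gauss-Legendre quadrature gives a sum of operator-concave rational terms, each SDP-representable. A scalar sanity check of that representation (my sketch; the paper's actual approximations may be built differently):

```python
import numpy as np

def log_approx(x, m=5):
    """Approximate log(x) by m-point Gauss-Legendre quadrature of
    the integral of (x-1)/(t(x-1)+1) over t in [0, 1]. Each summand
    is operator concave in x, so the matrix analogue of this formula
    preserves operator concavity."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    t = (nodes + 1) / 2   # map [-1, 1] -> [0, 1]
    w = weights / 2
    return sum(wj * (x - 1) / (tj * (x - 1) + 1) for tj, wj in zip(t, w))

for x in [0.5, 1.0, 2.0, 10.0]:
    print(x, log_approx(x), np.log(x))
```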
