
Maximizing Products of Linear Forms, and The Permanent of Positive Semidefinite Matrices

Added by Chenyang Yuan
Publication date: 2020
Language: English





We study the convex relaxation of a polynomial optimization problem: maximizing a product of linear forms over the complex sphere. We show that this convex program is also a relaxation of the permanent of Hermitian positive semidefinite (HPSD) matrices. By analyzing a constructive randomized rounding algorithm, we obtain an improved multiplicative approximation factor for the permanent of HPSD matrices, as well as computationally efficient certificates for this approximation. We also propose an analog of van der Waerden's conjecture for HPSD matrices, where the polynomial optimization problem is interpreted as a relaxation of the permanent.
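Written out explicitly, the problem is the following; this is a minimal formalization consistent with the abstract, where the $a_i \in \mathbb{C}^n$ are the vectors defining the linear forms and $A$ is their Gram matrix: maximize $\prod_{i=1}^{n} |\langle a_i, x \rangle|^2$ over $x \in \mathbb{C}^n$ with $\|x\|_2 = 1$, where $A_{ij} = \langle a_i, a_j \rangle$ is Hermitian positive semidefinite.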



Related Research

We design a deterministic polynomial time $c^n$ approximation algorithm for the permanent of positive semidefinite matrices, where $c=e^{\gamma+1}\simeq 4.84$. We write a natural convex relaxation and show that its optimum solution gives a $c^n$ approximation of the permanent. We further show that this factor is asymptotically tight by constructing a family of positive semidefinite matrices.
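As a point of reference for such $c^n$ guarantees, the permanent of a small matrix can still be computed exactly. Below is a minimal Python sketch of Ryser's formula (a standard exact method, not part of the paper above) that can serve as ground truth when judging approximation factors.

import numpy as np
from itertools import combinations

def permanent_ryser(A):
    # Exact permanent via Ryser's formula, O(2^n * n^2); feasible only for
    # small n, e.g. as ground truth when judging c^n approximation factors.
    A = np.asarray(A)
    n = A.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            cols = list(subset)
            row_sums = A[:, cols].sum(axis=1)   # sum_{j in S} A[i, j] for each row i
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(permanent_ryser(A))   # per(A) = 2*3 + 1*1 = 7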
We study the problem of maximizing the geometric mean of $d$ low-degree non-negative forms on the real or complex sphere in $n$ variables. We show that this highly non-convex problem is NP-hard even when the forms are quadratic, and that it is equivalent to optimizing a homogeneous polynomial of degree $O(d)$ on the sphere. The standard Sum-of-Squares based convex relaxation for this polynomial optimization problem requires solving a semidefinite program (SDP) of size $n^{O(d)}$, with multiplicative approximation guarantees of $\Omega(\frac{1}{n})$. We exploit the compact representation of this polynomial to introduce an SDP relaxation of size polynomial in $n$ and $d$, and prove that it achieves a constant-factor multiplicative approximation when maximizing the geometric mean of non-negative quadratic forms. We also show that this analysis is asymptotically tight, with a sequence of instances where the gap between the relaxation and the true optimum approaches this constant factor as $d \rightarrow \infty$. Next we propose a series of intermediate relaxations of increasing complexity that interpolate to the full Sum-of-Squares relaxation, as well as a rounding algorithm that finds an approximate solution from the solution of any intermediate relaxation. Finally, we show that this approach can be generalized to relaxations of products of non-negative forms of any degree.
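To get a feel for the objective, the geometric mean of quadratic forms can also be attacked with plain local search. The sketch below is a heuristic projected gradient ascent on the log of the geometric mean, not the SDP relaxation or rounding algorithm described above, and the step size and iteration count are arbitrary illustrative choices.

import numpy as np

def geo_mean_local_search(Qs, iters=2000, step=0.05, seed=0):
    # Heuristic projected gradient ascent on
    #   f(x) = (1/d) * sum_i log(x^T Q_i x)   over the unit sphere,
    # i.e. the log of the geometric mean of the quadratic forms.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(Qs[0].shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        grad = sum((2.0 * Q @ x) / (x @ Q @ x) for Q in Qs) / len(Qs)
        x = x + step * grad
        x /= np.linalg.norm(x)                  # project back onto the sphere
    return x, np.prod([x @ Q @ x for Q in Qs]) ** (1.0 / len(Qs))

# toy instance: d random PSD quadratic forms in n variables
rng = np.random.default_rng(1)
n, d = 6, 4
Qs = [B @ B.T for B in rng.standard_normal((d, n, n))]
print(geo_mean_local_search(Qs)[1])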
We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit the development of algorithms.
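A compact sketch of a Gaussian sampling estimator in this spirit is given below. It relies on the Wick-type identity per$(A) = \mathbb{E}[\prod_i |v_i^* g|^2]$, where $A = V^* V$ (the $v_i$ are the columns of $V$) and $g$ is a standard circularly symmetric complex Gaussian; the paper's estimator is built from random coherent states and photon-counting probabilities, so this is only an illustration of the sampling idea, not the authors' algorithm.

import numpy as np

def permanent_hpsd_mc(A, samples=200_000, seed=0):
    # Monte-Carlo estimate of per(A) for Hermitian PSD A, using
    # per(A) = E[ prod_i |v_i^* g|^2 ] with A = V^* V and g a standard
    # circularly symmetric complex Gaussian (E[g g^*] = I).
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    w, U = np.linalg.eigh(A)                                    # A = U diag(w) U^*
    V = np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T    # V^* V = A
    g = (rng.standard_normal((samples, n))
         + 1j * rng.standard_normal((samples, n))) / np.sqrt(2)
    Z = g @ V.conj()                            # Z[s, i] = v_i^* g_s
    vals = np.prod(np.abs(Z) ** 2, axis=1)      # product of the n linear forms
    return vals.mean()                          # variance grows quickly with n

For small $n$ the output can be checked against the exact Ryser computation sketched earlier.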
We consider a new and general online resource allocation problem, where the goal is to maximize a function of a positive semidefinite (PSD) matrix subject to a scalar budget constraint. The problem data arrive online, and the algorithm must make an irrevocable decision at each step. Of particular interest are classic experiment design problems in the online setting, with the algorithm deciding whether to allocate budget to each experiment as new experiments become available sequentially. We analyze two greedy primal-dual algorithms and provide bounds on their competitive ratios. Our analysis relies on a smooth surrogate of the objective function that needs to satisfy a new diminishing-returns (PSD-DR) property: its gradient is order-reversing with respect to the PSD cone. Using the representation of monotone maps on the PSD cone given by Löwner's theorem, we obtain a convex parametrization of the family of functions satisfying PSD-DR. We then formulate a convex optimization problem to directly optimize our competitive-ratio bound over this set. This design problem can be solved offline before the data start arriving. The online algorithm that uses the designed smoothing is tailored to the given cost function and enjoys a competitive ratio at least as good as our optimized bound. We provide examples of computing the smooth surrogate for D-optimal and A-optimal experiment design, and we demonstrate the performance of the custom-designed algorithm.
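For concreteness, a toy online rule for D-optimal experiment design is sketched below: accept an arriving experiment if its marginal log-det gain per unit cost clears a fixed threshold and fits the remaining budget. This is only an illustrative greedy baseline under assumed inputs (feature vector, cost), not the primal-dual algorithms or the optimized smoothing constructed in the paper.

import numpy as np

def online_dopt_greedy(experiments, budget, threshold=0.1, eps=1e-3):
    # experiments: list of (a, c) pairs arriving online, with feature
    # vector a and scalar cost c.  Decisions are irrevocable.
    n = experiments[0][0].shape[0]
    M = eps * np.eye(n)                         # regularized information matrix
    spent, chosen = 0.0, []
    for t, (a, c) in enumerate(experiments):
        gain = (np.linalg.slogdet(M + np.outer(a, a))[1]
                - np.linalg.slogdet(M)[1])      # marginal log-det (D-optimality) gain
        if spent + c <= budget and gain / c >= threshold:
            M += np.outer(a, a)
            spent += c
            chosen.append(t)
    return chosen, M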
Minghua Lin (2016)
We introduce some new notions associated with $2\times 2$ block positive semidefinite matrices. These notions concern inequalities between the singular values of the off-diagonal blocks and the eigenvalues of the arithmetic mean or geometric mean of the diagonal blocks. We investigate some relations between them. Many examples are included to illustrate these relations.
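The comparisons described above are easy to explore numerically. The short script below, a small sanity check not taken from the paper, draws a random block PSD matrix and prints the singular values of the off-diagonal block next to the eigenvalues of the arithmetic mean of the diagonal blocks; it does not assert any particular inequality.

import numpy as np

def block_psd_comparison(n=3, seed=0):
    # Random Hermitian PSD matrix partitioned into 2x2 blocks [[A, X], [X^*, C]].
    rng = np.random.default_rng(seed)
    B = (rng.standard_normal((2 * n, 2 * n))
         + 1j * rng.standard_normal((2 * n, 2 * n)))
    H = B @ B.conj().T
    A, X, C = H[:n, :n], H[:n, n:], H[n:, n:]
    sv = np.linalg.svd(X, compute_uv=False)               # singular values of X
    ev = np.sort(np.linalg.eigvalsh((A + C) / 2))[::-1]   # eigenvalues of the arithmetic mean
    return sv, ev

sv, ev = block_psd_comparison()
print("singular values of X:     ", np.round(sv, 4))
print("eigenvalues of (A + C)/2: ", np.round(ev, 4))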