
Exponential lower bounds on spectrahedral representations of hyperbolicity cones

Posted by Nikhil Srivastava
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





The Generalized Lax Conjecture asks whether every hyperbolicity cone is a section of a semidefinite cone of sufficiently high dimension. We prove that the space of hyperbolicity cones of hyperbolic polynomials of degree $d$ in $n$ variables contains $(n/d)^{\Omega(d)}$ pairwise distant cones in a certain metric, and therefore that any semidefinite representation of such cones must have dimension at least $(n/d)^{\Omega(d)}$ (even if a small approximation is allowed). The proof contains several ingredients of independent interest, including the identification of a large subspace in which the elementary symmetric polynomials lie in the relative interior of the set of hyperbolic polynomials, and quantitative […]
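For readers unfamiliar with the terminology, the following standard definitions and example (an illustration, not taken from the paper itself) may help. A homogeneous polynomial $p$ of degree $d$ in $n$ variables is hyperbolic with respect to a direction $e$ if $p(e) \neq 0$ and, for every $x \in \mathbb{R}^n$, the univariate polynomial $t \mapsto p(te - x)$ has only real roots; its hyperbolicity cone is

$$\Lambda_+(p, e) = \{\, x \in \mathbb{R}^n : \text{all roots of } t \mapsto p(te - x) \text{ are nonnegative} \,\}.$$

For example, $p(X) = \det X$ on symmetric matrices is hyperbolic with respect to $e = I$, and $\Lambda_+(\det, I)$ is exactly the cone of positive semidefinite matrices, i.e. the semidefinite cone the Generalized Lax Conjecture uses as a building block; the elementary symmetric polynomials mentioned above are hyperbolic with respect to the all-ones vector.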


Read also

Amenability is a notion of facial exposedness for convex cones that is stronger than being facially dual complete (or nice) which is, in turn, stronger than merely being facially exposed. Hyperbolicity cones are a family of algebraically structured closed convex cones that contain all spectrahedra (linear sections of positive semidefinite cones) as special cases. It is known that all spectrahedra are amenable. We establish that all hyperbolicity cones are amenable. As part of the argument, we show that any face of a hyperbolicity cone is a hyperbolicity cone. As a corollary, we show that the intersection of two hyperbolicity cones, not necessarily sharing a common relative interior point, is a hyperbolicity cone.
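A familiar special case of the statement that faces of hyperbolicity cones are again hyperbolicity cones (an illustration, not the paper's argument): every face of the positive semidefinite cone $\mathbb{S}^n_+$ has the form

$$F_U = \{\, X \in \mathbb{S}^n_+ : \operatorname{range}(X) \subseteq \operatorname{range}(U) \,\} = \{\, U Y U^{\mathsf{T}} : Y \in \mathbb{S}^r_+ \,\},$$

where $U$ is an $n \times r$ matrix with orthonormal columns, so each face is itself (a copy of) a smaller semidefinite cone, and in particular a hyperbolicity cone.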
Robert Beals, 1998
We examine the number T of queries that a quantum network requires to compute several Boolean functions on {0,1}^N in the black-box model. We show that, in the black-box model, the exponential quantum speed-up obtained for partial functions (i.e. problems involving a promise on the input) by Deutsch and Jozsa and by Simon cannot be obtained for any total function: if a quantum algorithm computes some total Boolean function f with bounded-error using T black-box queries then there is a classical deterministic algorithm that computes f exactly with O(T^6) queries. We also give asymptotically tight characterizations of T for all symmetric f in the exact, zero-error, and bounded-error settings. Finally, we give new precise bounds for AND, OR, and PARITY. Our results are a quantum extension of the so-called polynomial method, which has been successfully applied in classical complexity theory, and also a quantum extension of results by Nisan about a polynomial relationship between randomized and deterministic decision tree complexity.
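A rough sketch of how bounds of this type are assembled via the polynomial method (standard inequalities, paraphrased rather than quoted from the paper): a bounded-error quantum algorithm making $T$ queries yields a real polynomial of degree at most $2T$ that approximates $f$, and combining the known relations between deterministic query complexity $D(f)$, block sensitivity $\mathrm{bs}(f)$, and approximate degree $\widetilde{\deg}(f)$ gives

$$D(f) \;\le\; \mathrm{bs}(f)^3 \;\le\; O\bigl(\widetilde{\deg}(f)^{6}\bigr) \;\le\; O\bigl((2T)^{6}\bigr) \;=\; O(T^6).$$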
This paper studies the problem of detecting the presence of a small dense community planted in a large Erdős–Rényi random graph $\mathcal{G}(N,q)$, where the edge probability within the community exceeds $q$ by a constant factor. Assuming the hardness of the planted clique detection problem, we show that the computational complexity of detecting the community exhibits the following phase transition phenomenon: As the graph size $N$ grows and the graph becomes sparser according to $q=N^{-\alpha}$, there exists a critical value of $\alpha = \frac{2}{3}$, below which there exists a computationally intensive procedure that can detect far smaller communities than any computationally efficient procedure, and above which a linear-time procedure is statistically optimal. The results also lead to the average-case hardness results for recovering the dense community and approximating the densest $K$-subgraph.
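As a concrete illustration of the model, here is a minimal simulation sketch assuming numpy; the function names, the density factor c, and the threshold tau are illustrative choices, not parameters from the paper.

    import numpy as np

    def planted_community_graph(N, K, q, c=2.0, rng=None):
        """Sample an Erdos-Renyi graph G(N, q) and overlay a planted
        community of size K whose internal edge probability is c*q."""
        rng = np.random.default_rng(rng)
        A = (rng.random((N, N)) < q).astype(np.uint8)
        members = rng.choice(N, size=K, replace=False)
        A[np.ix_(members, members)] = (rng.random((K, K)) < min(c * q, 1.0)).astype(np.uint8)
        A = np.triu(A, 1)              # keep each potential edge once
        return A + A.T, members        # symmetric adjacency matrix, planted set

    def total_edge_test(A, q, tau=3.0):
        """Linear-time detector: flag a community when the total edge count
        exceeds its null mean by more than tau null standard deviations."""
        N = A.shape[0]
        m = A.sum() / 2
        mean = q * N * (N - 1) / 2
        std = np.sqrt(N * (N - 1) / 2 * q * (1 - q))
        return (m - mean) / std > tau

Counting edges is the kind of linear-time statistic that, per the abstract, is already statistically optimal in the sparse regime $\alpha > \frac{2}{3}$, whereas detecting much smaller communities below that threshold appears to require computationally intensive procedures.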
We consider a range of simply stated dynamic data structure problems on strings. An update changes one symbol in the input and a query asks us to compute some function of the pattern of length $m$ and a substring of a longer text. We give both conditional and unconditional lower bounds for variants of exact matching with wildcards, inner product, and Hamming distance computation via a sequence of reductions. As an example, we show that there does not exist an $O(m^{1/2-\varepsilon})$ time algorithm for a large range of these problems unless the online Boolean matrix-vector multiplication conjecture is false. We also provide nearly matching upper bounds for most of the problems we consider.
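For contrast with the lower bounds, the trivial dynamic baseline against which they should be measured looks as follows (a sketch; the class and method names are ours, not from the paper).

    class DynamicHamming:
        """Naive baseline: O(1) single-symbol updates to the text and
        O(m) per Hamming-distance query against a fixed pattern of length m."""

        def __init__(self, text, pattern):
            self.text = list(text)
            self.pattern = pattern

        def update(self, i, symbol):
            self.text[i] = symbol

        def hamming(self, start):
            window = self.text[start:start + len(self.pattern)]
            return sum(a != b for a, b in zip(self.pattern, window))

The conditional lower bounds quoted above say that, unless the online Boolean matrix-vector multiplication conjecture fails, the per-operation cost for such problems cannot be pushed down to $O(m^{1/2-\varepsilon})$.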
We give lower bounds on the performance of two of the most popular sampling methods in practice, the Metropolis-adjusted Langevin algorithm (MALA) and multi-step Hamiltonian Monte Carlo (HMC) with a leapfrog integrator, when applied to well-conditioned distributions. Our main result is a nearly-tight lower bound of $\widetilde{\Omega}(\kappa d)$ on the mixing time of MALA from an exponentially warm start, matching a line of algorithmic results up to logarithmic factors and answering an open question of Chewi et al. We also show that a polynomial dependence on dimension is necessary for the relaxation time of HMC under any number of leapfrog steps, and bound the gains achievable by changing the step count. Our HMC analysis draws upon a novel connection between leapfrog integration and Chebyshev polynomials, which may be of independent interest.
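For reference, a minimal MALA sampler of the kind whose mixing time is being lower-bounded (a sketch assuming numpy; the step size, warm start, and target class studied in the paper are not reproduced here).

    import numpy as np

    def mala(log_pi, grad_log_pi, x0, step, n_iter, rng=None):
        """Metropolis-adjusted Langevin algorithm (MALA), minimal sketch.
        Proposal: x' = x + step * grad_log_pi(x) + sqrt(2 * step) * N(0, I),
        accepted or rejected with the usual Metropolis-Hastings correction."""
        rng = np.random.default_rng(rng)
        x = np.asarray(x0, dtype=float)
        samples = np.empty((n_iter, x.size))

        def log_q(y, z):
            # log-density (up to a constant) of proposing y from z
            diff = y - z - step * grad_log_pi(z)
            return -np.dot(diff, diff) / (4.0 * step)

        for t in range(n_iter):
            prop = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
            log_alpha = log_pi(prop) + log_q(x, prop) - log_pi(x) - log_q(prop, x)
            if np.log(rng.random()) < log_alpha:
                x = prop
            samples[t] = x
        return samples

For a standard Gaussian target one would take log_pi = lambda x: -0.5 * np.dot(x, x) and grad_log_pi = lambda x: -x; the lower bound concerns how many such iterations are needed to mix for well-conditioned targets with condition number $\kappa$ in dimension $d$.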