
Quantum algorithms and lower bounds for convex optimization

Posted by Tongyang Li
Publication date: 2018
Paper language: English





While recent work suggests that quantum computers can speed up the solution of semidefinite programs, little is known about the quantum complexity of more general convex optimization. We present a quantum algorithm that can optimize a convex function over an $n$-dimensional convex body using $\tilde{O}(n)$ queries to oracles that evaluate the objective function and determine membership in the convex body. This represents a quadratic improvement over the best-known classical algorithm. We also study limitations on the power of quantum computers for general convex optimization, showing that it requires $\tilde{\Omega}(\sqrt{n})$ evaluation queries and $\Omega(\sqrt{n})$ membership queries.
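To make the query model concrete, here is a minimal classical sketch in Python (not code from the paper; the `EvaluationOracle`/`MembershipOracle` wrappers, step sizes, and toy objective are all illustrative assumptions). It shows how an optimizer that only touches the problem through the two oracles already spends about $n$ evaluation queries per step when estimating a gradient classically, which is the kind of overhead the quantum query counts above improve on.

```python
import numpy as np

# Hypothetical wrappers for the two oracles in the abstract's query model:
# an evaluation oracle returns f(x); a membership oracle answers "is x in K?".
class EvaluationOracle:
    def __init__(self, f):
        self.f, self.calls = f, 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

class MembershipOracle:
    def __init__(self, indicator):
        self.inside, self.calls = indicator, 0
    def __call__(self, x):
        self.calls += 1
        return self.inside(x)

def project_toward(x, x0, member, iters=40):
    """Pull x toward a known interior point x0 until the membership
    oracle accepts it (bisection on the segment from x0 to x)."""
    if member(x):
        return x
    lo, hi = 0.0, 1.0  # fraction of the way from x0 to x
    for _ in range(iters):
        mid = (lo + hi) / 2
        if member(x0 + mid * (x - x0)):
            lo = mid
        else:
            hi = mid
    return x0 + lo * (x - x0)

def zeroth_order_descent(evalf, member, x0, steps=200, lr=0.05, h=1e-4):
    """Naive classical baseline: estimate a gradient with n+1 evaluation
    queries per step (finite differences), then take a projected step.
    Each step costs Theta(n) evaluation queries."""
    x = x0.copy()
    n = x.size
    for _ in range(steps):
        fx = evalf(x)
        g = np.array([(evalf(x + h * e) - fx) / h for e in np.eye(n)])
        x = project_toward(x - lr * g, x0, member)
    return x

if __name__ == "__main__":
    n = 5
    evalf = EvaluationOracle(lambda x: np.sum((x - 0.3) ** 2))   # toy convex objective
    member = MembershipOracle(lambda x: np.linalg.norm(x) <= 1)  # unit ball as the body K
    x_best = zeroth_order_descent(evalf, member, np.zeros(n))
    print(x_best, evalf.calls, member.calls)
```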




Read also

We study to what extent quantum algorithms can speed up solving convex optimization problems. Following the classical literature we assume access to a convex set via various oracles, and we examine the efficiency of reductions between the different oracles. In particular, we show how a separation oracle can be implemented using $\tilde{O}(1)$ quantum queries to a membership oracle, which is an exponential quantum speed-up over the $\Omega(n)$ membership queries that are needed classically. We show that a quantum computer can very efficiently compute an approximate subgradient of a convex Lipschitz function. Combining this with a simplification of recent classical work of Lee, Sidford, and Vempala gives our efficient separation oracle. This in turn implies, via a known algorithm, that $\tilde{O}(n)$ quantum queries to a membership oracle suffice to implement an optimization oracle (the best known classical upper bound on the number of membership queries is quadratic). We also prove several lower bounds: $\Omega(\sqrt{n})$ quantum separation (or membership) queries are needed for optimization if the algorithm knows an interior point of the convex set, and $\Omega(n)$ quantum separation queries are needed if it does not.
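The subgradient-to-separation step mentioned above can be illustrated classically. The sketch below (a hedged toy example, not the paper's quantum routine; function names are made up) uses a finite-difference gradient estimate of a convex $f$ at a point $y$ together with the subgradient inequality $f(x) \ge f(y) + g \cdot (x - y)$ to produce a hyperplane separating $y$ from a sublevel set of $f$. Classically the gradient estimate costs $n+1$ evaluations; the quantum speed-up in the abstract replaces this step with $\tilde{O}(1)$ queries.

```python
import numpy as np

def approx_gradient(f, y, h=1e-5):
    """Classical finite-difference gradient estimate: n+1 evaluations of f."""
    n = y.size
    fy = f(y)
    g = np.array([(f(y + h * e) - fy) / h for e in np.eye(n)])
    return g, fy

def separating_hyperplane(f, y, c, h=1e-5):
    """If f(y) > c, return (g, b) such that g.x <= b for every x in the
    sublevel set {x : f(x) <= c} while g.y > b, i.e. a hyperplane
    separating y from that convex set (subgradient inequality)."""
    g, fy = approx_gradient(f, y, h)
    assert fy > c, "y already lies in (or on) the sublevel set"
    b = g @ y + (c - fy)
    return g, b

if __name__ == "__main__":
    f = lambda x: np.sum(x ** 2)           # toy convex function, Lipschitz on bounded sets
    y = np.array([2.0, 1.0])
    g, b = separating_hyperplane(f, y, c=1.0)
    print(g @ y > b)                        # y is on the far side of the hyperplane
    print(g @ np.array([0.5, 0.5]) <= b)    # a point with f <= 1 is on the near side
```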
The prior independent framework for algorithm design considers how well an algorithm that does not know the distribution of its inputs approximates the expected performance of the optimal algorithm for this distribution. This paper gives a method that is agnostic to problem setting for proving lower bounds on the prior independent approximation factor of any algorithm. The method constructs a correlated distribution over inputs that can be generated both as a distribution over i.i.d. good-for-algorithms distributions and as a distribution over i.i.d. bad-for-algorithms distributions. Prior independent algorithms are upper-bounded by the optimal algorithm for the latter distribution even when the true distribution is the former. Thus, the ratio of the expected performances of the Bayesian optimal algorithms for these two decompositions is a lower bound on the prior independent approximation ratio. The techniques of the paper connect prior independent algorithm design, Yao's Minimax Principle, and information design. We apply this framework to give new lower bounds on several canonical prior independent mechanism design problems.
We lower bound the complexity of finding $\epsilon$-stationary points (with gradient norm at most $\epsilon$) using stochastic first-order methods. In a well-studied model where algorithms access smooth, potentially non-convex functions through queries to an unbiased stochastic gradient oracle with bounded variance, we prove that (in the worst case) any algorithm requires at least $\epsilon^{-4}$ queries to find an $\epsilon$-stationary point. The lower bound is tight, and establishes that stochastic gradient descent is minimax optimal in this model. In a more restrictive model where the noisy gradient estimates satisfy a mean-squared smoothness property, we prove a lower bound of $\epsilon^{-3}$ queries, establishing the optimality of recently proposed variance reduction techniques.
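For intuition about that oracle model, here is a hedged toy sketch of plain SGD with an unbiased, bounded-variance stochastic gradient on a smooth non-convex toy objective (the objective, noise level, and step size are illustrative assumptions, not from the paper). The result above says that, in the worst case, no stochastic first-order method can find an $\epsilon$-stationary point with fewer than roughly $\epsilon^{-4}$ such queries.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_grad(x):
    # toy smooth non-convex objective f(x) = sum_i cos(x_i) + 0.05 * ||x||^2
    return -np.sin(x) + 0.1 * x

def stochastic_grad(x, sigma=0.5):
    # unbiased estimate with bounded variance (sigma^2 per coordinate)
    return true_grad(x) + sigma * rng.standard_normal(x.size)

def sgd_until_stationary(x0, eps=0.2, lr=0.005, max_queries=1_000_000):
    """Run SGD until the true gradient norm drops below eps, counting
    how many stochastic-gradient queries were spent."""
    x, queries = x0.copy(), 0
    while queries < max_queries:
        x -= lr * stochastic_grad(x)
        queries += 1
        if queries % 1000 == 0 and np.linalg.norm(true_grad(x)) <= eps:
            return x, queries
    return x, queries

if __name__ == "__main__":
    x, q = sgd_until_stationary(rng.standard_normal(10))
    print(np.linalg.norm(true_grad(x)), q)
```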
Kaiyi Ji, Yingbin Liang (2021)
Bilevel optimization has recently attracted growing interest due to its wide applications in modern machine learning problems. Although recent studies have characterized the convergence rate for several such popular algorithms, it is still unclear how much further these convergence rates can be improved. In this paper, we address this fundamental question from two perspectives. First, we provide the first-known lower complexity bounds of $\widetilde{\Omega}\big(\frac{1}{\sqrt{\mu_x}\mu_y}\big)$ and $\widetilde{\Omega}\big(\frac{1}{\sqrt{\epsilon}}\min\{\frac{1}{\mu_y},\frac{1}{\sqrt{\epsilon^{3}}}\}\big)$ respectively for strongly-convex-strongly-convex and convex-strongly-convex bilevel optimizations. Second, we propose an accelerated bilevel optimizer named AccBiO, for which we provide the first-known complexity bounds without the gradient boundedness assumption (which was made in existing analyses) under the two aforementioned geometries. We also provide significantly tighter upper bounds than the existing complexity when the bounded gradient assumption does hold. We show that AccBiO achieves the optimal results (i.e., the upper and lower bounds match up to logarithmic factors) when the inner-level problem takes a quadratic form with a constant-level condition number. Interestingly, our lower bounds under both geometries are larger than the corresponding optimal complexities of minimax optimization, establishing that bilevel optimization is provably more challenging than minimax optimization.
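As a rough illustration of the problem structure (a hedged sketch of the generic two-loop scheme, not of AccBiO; the toy objectives, step sizes, and the finite-difference hypergradient are all assumptions made for the example): gradient descent approximately solves the strongly convex inner problem, and the outer loop then descends on $F(x) = f(x, y^*(x))$.

```python
import numpy as np

def inner_solve(x, y0, grad_y, steps=200, lr=0.1):
    """Approximately minimize the inner objective g(x, .) by gradient descent."""
    y = y0.copy()
    for _ in range(steps):
        y -= lr * grad_y(x, y)
    return y

def bilevel_gd(f_outer, grad_y_inner, x0, y0, steps=100, lr=0.2, h=1e-4):
    """Outer loop: descend on F(x) = f_outer(x, y*(x)), where y*(x) comes from
    inner_solve; the hypergradient is taken by finite differences for simplicity."""
    x = x0.copy()
    for _ in range(steps):
        F = lambda z: f_outer(z, inner_solve(z, y0, grad_y_inner))
        Fx = F(x)
        g = np.array([(F(x + h * e) - Fx) / h for e in np.eye(x.size)])
        x -= lr * g
    return x

if __name__ == "__main__":
    # toy strongly-convex-strongly-convex instance:
    # inner:  g(x, y) = 0.5 * ||y - x||^2   =>  y*(x) = x
    # outer:  f(x, y) = 0.5 * ||y - 1||^2 + 0.05 * ||x||^2
    grad_y_inner = lambda x, y: y - x
    f_outer = lambda x, y: 0.5 * np.sum((y - 1.0) ** 2) + 0.05 * np.sum(x ** 2)
    x = bilevel_gd(f_outer, grad_y_inner, np.zeros(3), np.zeros(3))
    print(x)  # approaches the minimizer of 0.5*||x - 1||^2 + 0.05*||x||^2
```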
We study the problem of \emph{local search} on a graph. Given a real-valued black-box function f on the graph's vertices, this is the problem of determining a local minimum of f, i.e., a vertex v for which f(v) is no more than f evaluated at any of v's neighbors. In 1983, Aldous gave the first strong lower bounds for the problem, showing that any randomized algorithm requires $\Omega(2^{n/2 - o(1)})$ queries to determine a local minimum on the n-dimensional hypercube. The next major step forward was not until 2004, when Aaronson, introducing a new method for query complexity bounds, both strengthened this lower bound to $\Omega(2^{n/2}/n^2)$ and gave an analogous lower bound on the quantum query complexity. While these bounds are very strong, they are known only for narrow families of graphs (hypercubes and grids). We show how to generalize Aaronson's techniques in order to give randomized (and quantum) lower bounds on the query complexity of local search for the family of vertex-transitive graphs. In particular, we show that for any vertex-transitive graph G of N vertices and diameter d, the randomized and quantum query complexities for local search on G are $\Omega(N^{1/2}/d\log N)$ and $\Omega(N^{1/4}/\sqrt{d\log N})$, respectively.
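For concreteness, here is a hedged toy sketch of the local-search problem itself on the hypercube (the algorithmic side, not the lower-bound construction; the random instance and the query counter are assumptions for illustration): query f on vertices, move to a strictly better neighbor until none exists, and count how many distinct vertices were evaluated.

```python
import numpy as np
from itertools import product

def local_search_hypercube(f, n, start=None):
    """Steepest-descent local search on the n-dimensional hypercube.
    Returns a local minimum of f and the number of distinct f-queries used."""
    queries = {}
    def q(v):                       # memoized black-box query to f
        if v not in queries:
            queries[v] = f(v)
        return queries[v]

    v = start if start is not None else tuple([0] * n)
    while True:
        # all n neighbors of v (flip one coordinate)
        best = min((v[:i] + (1 - v[i],) + v[i + 1:] for i in range(n)), key=q)
        if q(best) < q(v):
            v = best
        else:
            return v, len(queries)  # no strictly better neighbor: local minimum

if __name__ == "__main__":
    n = 10
    rng = np.random.default_rng(1)
    values = {v: rng.random() for v in product((0, 1), repeat=n)}  # random instance
    v, used = local_search_hypercube(lambda v: values[v], n)
    neighbors = [v[:i] + (1 - v[i],) + v[i + 1:] for i in range(n)]
    print(all(values[v] <= values[u] for u in neighbors), used)
```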