In this paper we study quantum algorithms for NP-complete problems whose best classical algorithm is an exponential-time application of dynamic programming. We introduce the path in the hypercube problem, which models many of these dynamic programming algorithms. In this problem we are asked whether there is a path from $0^n$ to $1^n$ in a given subgraph of the Boolean hypercube, where the edges are all directed from smaller to larger Hamming weight. We give a quantum algorithm that solves path in the hypercube in time $O^*(1.817^n)$. The technique combines Grover's search with computing a partial dynamic programming table. We use this approach to solve a variety of vertex ordering problems on graphs in the same time $O^*(1.817^n)$, and graph bandwidth in time $O^*(2.946^n)$. Then we use similar ideas to solve the travelling salesman problem and minimum set cover in time $O^*(1.728^n)$.
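As background for the technique, the classical baseline is the straightforward $O^*(2^n)$ dynamic program over the hypercube; a minimal sketch follows, assuming vertices are encoded as $n$-bit integers and the subgraph is given as a set of directed edge pairs (both encodings are illustrative, not the paper's). The quantum algorithm replaces the top layers of this table with Grover search over paths through a classically precomputed partial table.

```python
def path_in_hypercube(n, edges):
    """Decide reachability of 1^n from 0^n by dynamic programming.

    Vertices are n-bit integers; `edges` is a set of directed pairs
    (u, v) where v equals u with one extra bit set, so every edge goes
    from smaller to larger Hamming weight.
    """
    full = (1 << n) - 1
    # Process vertices in order of Hamming weight, so every edge's
    # tail is settled before its head is examined.
    order = sorted(range(full + 1), key=lambda v: bin(v).count("1"))
    reachable = {0}
    for v in order:
        if v == 0:
            continue
        for k in range(n):
            if (v >> k) & 1:
                u = v & ~(1 << k)  # predecessor with bit k cleared
                if u in reachable and (u, v) in edges:
                    reachable.add(v)
                    break
    return full in reachable
```

For example, with `n = 2` and `edges = {(0b00, 0b01), (0b01, 0b11)}` the function returns `True`.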
Constraint programming (CP) is a paradigm used to model and solve constraint satisfaction and combinatorial optimization problems. In CP, problems are modeled with constraints that describe acceptable solutions and solved with backtracking tree search augmented with logical inference. In this paper, we show how quantum algorithms can accelerate CP, at both the levels of inference and search. Leveraging existing quantum algorithms, we introduce a quantum-accelerated filtering algorithm for the $\texttt{alldifferent}$ global constraint and discuss its applicability to a broader family of global constraints with similar structure. We propose frameworks for the integration of quantum filtering algorithms within both classical and quantum backtracking search schemes, including a novel hybrid classical-quantum backtracking search method. This work suggests that CP is a promising candidate application for early fault-tolerant quantum computers and beyond.
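For intuition about the structure being exploited: classical filtering for $\texttt{alldifferent}$ (Régin's algorithm) is built on bipartite matching between variables and values, and the constraint is satisfiable iff some matching covers every variable. A minimal feasibility sketch follows, assuming domains are given as a dict from variable to allowed values (an illustrative interface); full filtering additionally prunes variable-value pairs that appear in no maximum matching.

```python
def alldifferent_feasible(domains):
    """Check satisfiability of an alldifferent constraint.

    `domains` maps each variable to an iterable of allowed values.
    Satisfiable iff the variable-value bipartite graph has a matching
    covering all variables (Hall's theorem), found here with simple
    augmenting paths -- the classical core of Regin-style filtering.
    """
    match = {}  # value -> variable currently matched to it

    def augment(var, seen):
        for val in domains[var]:
            if val in seen:
                continue
            seen.add(val)
            # Take a free value, or re-route the variable holding it.
            if val not in match or augment(match[val], seen):
                match[val] = var
                return True
        return False

    return all(augment(var, set()) for var in domains)
```

For instance, `alldifferent_feasible({'x': [1, 2], 'y': [1, 2], 'z': [2]})` returns `False`, since three variables would need distinct values from $\{1, 2\}$.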
We study the selective learning problem introduced by Qiao and Valiant (2019), in which the learner observes $n$ labeled data points one at a time. At a time of its choosing, the learner selects a window length $w$ and a model $\hat{\ell}$ from the model class $\mathcal{L}$, and then labels the next $w$ data points using $\hat{\ell}$. The excess risk incurred by the learner is defined as the difference between the average loss of $\hat{\ell}$ over those $w$ data points and the smallest possible average loss among all models in $\mathcal{L}$ over those $w$ data points. We give an improved algorithm, termed the hybrid exponential weights algorithm, that achieves an expected excess risk of $O((\log\log|\mathcal{L}| + \log\log n)/\log n)$. This result gives a doubly exponential improvement in the dependence on $|\mathcal{L}|$ over the best known bound of $O(\sqrt{|\mathcal{L}|/\log n})$. We complement the positive result with an almost matching lower bound, which suggests the worst-case optimality of the algorithm. We also study a more restrictive family of learning algorithms that are bounded-recall in the sense that, when a prediction window of length $w$ is chosen, the learner's decision depends only on the most recent $w$ data points. We analyze an exponential weights variant of the ERM algorithm in Qiao and Valiant (2019). This new algorithm achieves an expected excess risk of $O(\sqrt{\log|\mathcal{L}|/\log n})$, which is shown to be nearly optimal among all bounded-recall learners. Our analysis builds on a generalized version of the selective mean prediction problem in Drucker (2013) and Qiao and Valiant (2019), which may be of independent interest.
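The abstract does not spell out the algorithm, but the common core of exponential-weights methods is to weight each model by the exponential of its negated, scaled cumulative loss and sample accordingly; a generic sketch of that step under stated assumptions (the `cum_loss` interface and names are illustrative, not the paper's):

```python
import math
import random

def exp_weights_sample(models, cum_loss, eta):
    """Sample a model with probability proportional to
    exp(-eta * cumulative loss), the weighting step at the core of
    exponential-weights methods. `cum_loss[m]` is model m's total
    loss on the data observed so far (assumed bounded per point).
    """
    best = min(cum_loss[m] for m in models)  # shift for numerical stability
    weights = [math.exp(-eta * (cum_loss[m] - best)) for m in models]
    r = random.uniform(0.0, sum(weights))
    for m, w in zip(models, weights):
        r -= w
        if r <= 0:
            return m
    return models[-1]
```

Models with small cumulative loss dominate the sample exponentially fast, which is what drives the improved dependence on $|\mathcal{L}|$.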
Identifying the best arm of a multi-armed bandit is a central problem in bandit optimization. We study a quantum computational version of this problem with coherent oracle access to states encoding the reward probabilities of each arm as quantum amplitudes. Specifically, we show that we can find the best arm with fixed confidence using $\tilde{O}\bigl(\sqrt{\sum_{i=2}^n \Delta_i^{-2}}\bigr)$ quantum queries, where $\Delta_i$ represents the difference between the mean reward of the best arm and the $i^\text{th}$-best arm. This algorithm, based on variable-time amplitude amplification and estimation, gives a quadratic speedup compared to the best possible classical result. We also prove a matching quantum lower bound (up to poly-logarithmic factors).
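The quantum routine itself requires coherent amplitude access, so a faithful snippet is out of scope here; for contrast, a sketch of classical successive elimination, whose pull count scales (up to logs) with $\sum_{i=2}^n \Delta_i^{-2}$ itself rather than its square root (the `pull` interface and the confidence-radius constant are assumptions):

```python
import math

def successive_elimination(pull, n, delta=0.05, max_rounds=100_000):
    """Classical fixed-confidence best-arm identification.

    `pull(i)` returns one reward sample in [0, 1] from arm i. Arms
    whose upper confidence bound falls below the best arm's lower
    confidence bound are eliminated each round.
    """
    active = list(range(n))
    means = [0.0] * n
    for t in range(1, max_rounds + 1):
        for i in active:
            means[i] += (pull(i) - means[i]) / t  # running average
        rad = math.sqrt(math.log(4 * n * t * t / delta) / (2 * t))
        best = max(means[i] for i in active)
        active = [i for i in active if means[i] + 2 * rad >= best]
        if len(active) == 1:
            return active[0]
    return max(active, key=lambda i: means[i])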
We introduce the online stochastic Convex Programming (CP) problem, a very general version of stochastic online problems that allows arbitrary concave objectives and convex feasibility constraints. Many well-studied problems, such as online stochastic packing and covering and online stochastic matching with concave returns, are special cases of online stochastic CP. We present fast algorithms for these problems, which achieve near-optimal regret guarantees for both the i.i.d. and the random permutation models of stochastic inputs. When applied to the special case of online packing, our ideas yield a simpler and faster primal-dual algorithm for this well-studied problem, which achieves the optimal competitive ratio. Our techniques make explicit the connection of the primal-dual paradigm and online learning to online stochastic CP.
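To make the primal-dual connection concrete in the packing special case, here is an illustrative accept/reject rule with multiplicative dual-price updates; it is a sketch of the paradigm, not the paper's algorithm, and the price initialization and learning rate `eta` are assumptions:

```python
import math

def online_packing(requests, capacities, eta=0.5):
    """Primal-dual sketch for online packing: keep a dual price per
    resource, accept a request iff its value covers its priced
    resource usage, and raise prices multiplicatively on the
    resources it consumes.

    `requests` yields (value, usage) pairs, where usage maps each
    resource to the amount consumed.
    """
    price = {r: 1.0 / c for r, c in capacities.items()}
    used = {r: 0.0 for r in capacities}
    accepted = []
    for value, usage in requests:
        cost = sum(price[r] * a for r, a in usage.items())
        fits = all(used[r] + a <= capacities[r] for r, a in usage.items())
        if fits and value >= cost:
            accepted.append((value, usage))
            for r, a in usage.items():
                used[r] += a
                # Resources nearing capacity grow exponentially expensive,
                # which is the online-learning (multiplicative-weights) step.
                price[r] *= math.exp(eta * a / capacities[r])
    return accepted
```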
While recent work suggests that quantum computers can speed up the solution of semidefinite programs, little is known about the quantum complexity of more general convex optimization. We present a quantum algorithm that can optimize a convex function over an $n$-dimensional convex body using $\tilde{O}(n)$ queries to oracles that evaluate the objective function and determine membership in the convex body. This represents a quadratic improvement over the best-known classical algorithm. We also study limitations on the power of quantum computers for general convex optimization, showing that it requires $\tilde{\Omega}(\sqrt{n})$ evaluation queries and $\Omega(\sqrt{n})$ membership queries.
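One way to see where a quantum advantage can enter: with only a classical evaluation oracle, a single finite-difference gradient estimate costs $n + 1$ queries in $n$ dimensions, whereas quantum gradient estimation in the style of Jordan's algorithm obtains it from $\tilde{O}(1)$ coherent evaluations. A sketch of the classical estimate (the step size `h` is an illustrative choice):

```python
def grad_estimate(f, x, h=1e-6):
    """Forward-difference gradient estimate: n + 1 evaluation-oracle
    queries for one gradient in n dimensions.
    """
    fx = f(x)  # one query at the base point
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h  # one query per perturbed coordinate
        g.append((f(xp) - fx) / h)
    return g
```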