In this work, we study the limitations of the Quantum Approximate Optimization Algorithm (QAOA) through the lens of statistical physics and show that there exists $\epsilon > 0$ such that $\epsilon\log(n)$-depth QAOA cannot arbitrarily well approximate the ground state energy of random diluted $k$-spin glasses when $k \geq 4$ is even. This is equivalent to the weak approximation resistance of logarithmic-depth QAOA for the $k$-XOR problem. We further extend the limitation to other Boolean constraint satisfaction problems as long as the problem satisfies a combinatorial property called the coupled overlap-gap property (OGP) [Chen et al., Annals of Probability, 47(3), 2019]. As a consequence of our techniques, we confirm a conjecture of Brandao et al. [arXiv:1812.04170, 2018] asserting that the landscape independence of QAOA extends to logarithmic depth, in other words, for every fixed choice of QAOA angle parameters, the algorithm at logarithmic depth performs almost equally well on almost all instances. Our results provide a new way to study the power and limits of QAOA through statistical physics methods and combinatorial properties.
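
For orientation, here is a minimal sketch of the two objects involved, written in generic conventions (the normalization of $H$ and the ordering of the QAOA factors follow the standard presentation and may differ from the paper's):

```latex
% Diluted k-spin glass: m = O(n) constraints, each acting on a random
% k-tuple of spins sigma_i in {-1,+1} with a random coupling J_a.
H(\sigma) = \sum_{a=1}^{m} J_a\, \sigma_{i_1(a)} \sigma_{i_2(a)} \cdots \sigma_{i_k(a)}

% Depth-p QAOA ansatz with angles (gamma, beta) and mixer B = \sum_i X_i;
% the factors are applied in order j = 1, ..., p, and the angles are tuned
% to optimize the energy <gamma,beta| H |gamma,beta>.
|\gamma,\beta\rangle = e^{-i\beta_p B} e^{-i\gamma_p H} \cdots e^{-i\beta_1 B} e^{-i\gamma_1 H}\, |+\rangle^{\otimes n}
```
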
In this work, we initiate the study of the Minimum Circuit Size Problem (MCSP) in the quantum setting. MCSP is the problem of computing the circuit complexity of Boolean functions. It is a fascinating problem in complexity theory -- its hardness is mysterious, and a better understanding of its hardness can have surprising implications for many fields in computer science. We first define and investigate the basic complexity-theoretic properties of minimum quantum circuit size problems for three natural objects: Boolean functions, unitaries, and quantum states. We show that these problems are not trivially in NP but in QCMA (or have QCMA protocols). Next, we explore the relations between the three quantum MCSPs and their variants. We discover that some reductions that are not known for classical MCSP exist for quantum MCSPs for unitaries and states, e.g., search-to-decision reduction and self-reduction. Finally, we systematically generalize results known for classical MCSP to the quantum setting (including quantum cryptography, quantum learning theory, quantum circuit lower bounds, and quantum fine-grained complexity) and also find new connections to tomography and quantum gravity. Due to the fundamental differences between classical and quantum circuits, most of our results require extra care and reveal properties and phenomena unique to the quantum setting. Our findings could be of interest for future studies, and we pose several open problems for further exploration along this direction.
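
For reference, the classical decision version is as follows (a standard definition; the quantum variants studied here replace the truth table by a unitary or a quantum state and measure size in quantum gates):

```latex
% Classical MCSP: given the 2^n-bit truth table T_f of a Boolean function f
% and a size bound s, decide whether some circuit with at most s gates
% computes f.
\mathrm{MCSP} = \bigl\{ (T_f, s) : \exists \text{ a circuit } C \text{ with } |C| \le s \text{ computing } f \bigr\}
```
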
We consider the approximability of constraint satisfaction problems in the streaming setting. For every constraint satisfaction problem (CSP) on $n$ variables taking values in $\{0,\ldots,q-1\}$, we prove that improving over the trivial approximability by a factor of $q$ requires $\Omega(n)$ space even on instances with $O(n)$ constraints. We also identify a broad subclass of problems for which any improvement over the trivial approximability requires $\Omega(n)$ space. The key technical core is an optimal $q^{-(k-1)}$-inapproximability for the case where every constraint is given by a system of $k-1$ linear equations $\bmod\; q$ over $k$ variables. Prior to our work, no such hardness was known for an approximation factor less than $1/2$ for any CSP. Our work builds on and extends the work of Kapralov and Krachun (Proc. STOC 2019), who showed a linear lower bound on any non-trivial approximation of the max cut in graphs. This corresponds roughly to the case of Max $k$-LIN-$\bmod\; q$ with $k=q=2$. Each one of the extensions provides non-trivial technical challenges that we overcome in this work.
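
To make the hard constraint family concrete, here is an illustrative instance of such a constraint (the specific equations and right-hand sides are part of the instance; this is our illustration, not the paper's notation):

```latex
% A constraint on k variables given by a system of k-1 linear equations
% mod q; its satisfying assignments form a 1-dimensional coset in [q]^k,
% so a uniformly random assignment satisfies it with probability q^{-(k-1)}.
x_{i_1} + x_{i_2} \equiv b_1, \quad x_{i_2} + x_{i_3} \equiv b_2, \quad \ldots, \quad x_{i_{k-1}} + x_{i_k} \equiv b_{k-1} \pmod{q}

% Special case k = q = 2: the single equation x_{i_1} + x_{i_2} = 1 (mod 2)
% is exactly a Max-CUT constraint, recovering the Kapralov--Krachun setting.
```
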
A constraint satisfaction problem (CSP), Max-CSP$({\cal F})$, is specified by a finite set of constraints ${\cal F} \subseteq \{[q]^k \to \{0,1\}\}$ for positive integers $q$ and $k$. An instance of the problem on $n$ variables is given by $m$ applications of constraints from ${\cal F}$ to subsequences of the $n$ variables, and the goal is to find an assignment to the variables that satisfies the maximum number of constraints. In the $(\gamma,\beta)$-approximation version of the problem for parameters $0 \leq \beta < \gamma \leq 1$, the goal is to distinguish instances where at least a $\gamma$ fraction of the constraints can be satisfied from instances where at most a $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of this problem in the context of streaming algorithms and give a dichotomy result in the dynamic setting, where constraints can be inserted or deleted. Specifically, for every family ${\cal F}$ and every $\beta < \gamma$, we show that either the approximation problem is solvable with polylogarithmic space in the dynamic setting, or not solvable with $o(\sqrt{n})$ space. We also establish tight inapproximability results for a broad subclass in the streaming insertion-only setting. Our work builds on, and significantly extends, previous work by the authors, who consider the special case of Boolean variables ($q=2$), singleton families ($|{\cal F}| = 1$), and where constraints may be placed on variables or their negations. Our framework extends the previous work non-trivially, allowing us to appeal to richer norm estimation algorithms to get our algorithmic results. For our negative results we introduce new variants of the communication problems studied in the previous work, build new reductions for these problems, and extend the technical parts of previous works.
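
The algorithmic side appeals to norm estimation in dynamic streams; the following is a generic AMS-style $\ell_2$-sketch of the kind of primitive involved (a textbook construction, not the paper's algorithm; a 4-wise independent hash family in place of the materialized sign vectors would bring the space down to polylogarithmic):

```python
import random

class AMSSketch:
    """Estimates ||x||_2^2 of a vector x maintained under insertions and
    deletions, by averaging squares of random +/-1 linear measurements."""

    def __init__(self, n, reps, seed=0):
        rng = random.Random(seed)
        # One random sign vector per repetition (materialized for clarity).
        self.signs = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(reps)]
        self.z = [0.0] * reps

    def update(self, i, delta):
        # Stream update x_i += delta; delta may be negative (deletion).
        for r in range(len(self.z)):
            self.z[r] += self.signs[r][i] * delta

    def estimate(self):
        # E[z_r^2] = ||x||_2^2, so the average concentrates around it.
        return sum(z * z for z in self.z) / len(self.z)
```
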
A Boolean constraint satisfaction problem (CSP), Max-CSP$(f)$, is a maximization problem specified by a constraint $f:\{-1,1\}^k\to\{0,1\}$. An instance of the problem consists of $m$ constraint applications on $n$ Boolean variables, where each constraint application applies the constraint to $k$ literals chosen from the $n$ variables and their negations. The goal is to compute the maximum number of constraints that can be satisfied by a Boolean assignment to the $n$~variables. In the $(\gamma,\beta)$-approximation version of the problem for parameters $\gamma \geq \beta \in [0,1]$, the goal is to distinguish instances where at least a $\gamma$ fraction of the constraints can be satisfied from instances where at most a $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of Max-CSP$(f)$ in the (dynamic) streaming setting, where constraints are inserted (and may also be deleted in the dynamic setting) one at a time. We completely characterize the approximability of all Boolean CSPs in the dynamic streaming setting. Specifically, given $f$, $\gamma$ and $\beta$ we show that either (1) the $(\gamma,\beta)$-approximation version of Max-CSP$(f)$ has a probabilistic dynamic streaming algorithm using $O(\log n)$ space, or (2) for every $\varepsilon > 0$ the $(\gamma-\varepsilon,\beta+\varepsilon)$-approximation version of Max-CSP$(f)$ requires $\Omega(\sqrt{n})$ space for probabilistic dynamic streaming algorithms. We also extend previously known results in the insertion-only setting to a wide variety of cases, in particular the case of $k=2$, where we get a dichotomy, and the case when the satisfying assignments of $f$ support a distribution on $\{-1,1\}^k$ with uniform marginals.
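
As a point of reference for the "easy" side of such dichotomies, every Max-CSP$(f)$ admits the following random-assignment baseline, computable in a stream with just a counter (an illustrative sketch under our own conventions, not one of the paper's algorithms):

```python
import itertools

def trivial_estimate(stream, f, k):
    """A uniformly random assignment satisfies each constraint with
    probability rho(f) = E_x[f(x)] (negating uniform literals keeps them
    uniform), so every instance has value at least rho * m, and outputting
    rho * m is a rho-approximation in O(log n) space."""
    rho = sum(f(x) for x in itertools.product((0, 1), repeat=k)) / 2 ** k
    m = sum(1 for _ in stream)  # the only state maintained is a counter
    return rho * m
```
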
One of the challenges in analyzing a learning algorithm is the circular entanglement between the objective value and the stochastic noise, also known as the chicken-and-egg phenomenon. Traditionally, this issue is tackled using special structure of the problem at hand, so the resulting analyses are difficult to generalize. In this paper, we present a general framework for proving high-probability bounds for stochastic dynamics in learning algorithms. Our framework composes standard techniques from probability theory into a streamlined three-step recipe with a general and flexible principle for tackling the chicken-and-egg problem. We demonstrate the power and flexibility of our framework by giving a unifying analysis, with both last-iterate and strong uniform high-probability convergence guarantees, for three very different learning problems: stochastic gradient descent for strongly convex functions, streaming principal component analysis, and linear bandits with stochastic gradient descent updates. We either improve or match the state-of-the-art bounds for all three dynamics.
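
As a minimal illustration of the first of the three dynamics (generic SGD with the classic step size for strong convexity; the constants and toy objective are ours, not the paper's):

```python
import random

def sgd_strongly_convex(grad_oracle, x0, T, mu):
    """grad_oracle(x) returns an unbiased stochastic gradient at x;
    mu is the strong-convexity parameter. Returns the last iterate."""
    x = list(x0)
    for t in range(1, T + 1):
        g = grad_oracle(x)
        eta = 1.0 / (mu * t)  # standard step size for mu-strongly convex f
        x = [xi - eta * gi for xi, gi in zip(x, g)]
    return x

# Toy usage: f(x) = 0.5 * ||x||^2 (mu = 1), gradient x plus Gaussian noise.
oracle = lambda x: [xi + random.gauss(0, 0.1) for xi in x]
print(sgd_strongly_convex(oracle, [1.0, -2.0], T=10_000, mu=1.0))
```
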
The linear cross-entropy benchmark (Linear XEB) has been used as a test for procedures simulating quantum circuits. Given a quantum circuit $C$ with $n$ inputs and outputs and a purported simulator whose output is distributed according to a distribution $p$ over $\{0,1\}^n$, the linear XEB fidelity of the simulator is $\mathcal{F}_{C}(p) = 2^n \mathbb{E}_{x \sim p}[q_C(x)] - 1$, where $q_C(x)$ is the probability that $x$ is output from the distribution $C|0^n\rangle$. A trivial simulator (e.g., the uniform distribution) satisfies $\mathcal{F}_C(p)=0$, while Google's noisy quantum simulation of a 53-qubit circuit $C$ achieved a fidelity value of $(2.24\pm0.21)\times10^{-3}$ (Arute et al., Nature 2019). In this work we give a classical randomized algorithm that for a given circuit $C$ of depth $d$ with Haar random 2-qubit gates achieves in expectation a fidelity value of $\Omega(\tfrac{n}{L} \cdot 15^{-d})$ in running time $\textsf{poly}(n,2^L)$. Here $L$ is the size of the \emph{light cone} of $C$: the maximum number of input bits that each output bit depends on. In particular, we obtain a polynomial-time algorithm that achieves a large fidelity of $\omega(1)$ for depth $O(\sqrt{\log n})$ two-dimensional circuits. To our knowledge, this is the first such result for two-dimensional circuits of super-constant depth. Our results can be considered as evidence that fooling the linear XEB test might be easier than achieving a full simulation of the quantum circuit.
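
The fidelity estimator follows directly from the definition above; a minimal sketch (here `q_C` is assumed available, e.g., from exact simulation of a small circuit, and `samples` are drawn from the simulator's distribution $p$):

```python
def linear_xeb(samples, q_C, n):
    """Estimate F_C(p) = 2^n * E_{x~p}[q_C(x)] - 1 from samples of p."""
    mean_q = sum(q_C(x) for x in samples) / len(samples)
    return (2 ** n) * mean_q - 1.0

# Sanity checks: a uniform sampler sees q_C(x) averaging 2^{-n}, giving
# fidelity ~ 0; sampling from the ideal output distribution of a random
# circuit gives fidelity ~ 1 (Porter-Thomas statistics).
```
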
Oja's rule [Oja, Journal of Mathematical Biology, 1982] is a well-known biologically plausible algorithm that uses a Hebbian-type synaptic update rule to solve streaming principal component analysis (PCA). Computational neuroscientists have known that this biological version of Oja's rule converges in the limit to the top eigenvector of the covariance matrix of the input. However, prior to this work, proving any convergence rate guarantee remained open. In this work, we give the first convergence rate analysis for the biological version of Oja's rule in solving streaming PCA. Moreover, our convergence rate matches the information-theoretic lower bound up to logarithmic factors and outperforms the state-of-the-art upper bound for streaming PCA. Furthermore, we develop a novel framework inspired by ordinary differential equations (ODEs) to analyze general stochastic dynamics. The framework abandons the traditional step-by-step analysis and instead analyzes a stochastic dynamic in one shot by giving a closed-form solution to the entire dynamic. The one-shot framework allows us to apply stopping-time and martingale techniques to obtain flexible and precise control over the dynamic. We believe that this general framework is powerful and should lead to effective yet simple analyses for a large class of problems with stochastic dynamics.
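
For concreteness, the classical Oja update has the following form (a minimal sketch with an illustrative constant step size; the paper's step-size schedule and conventions may differ):

```python
import numpy as np

def oja(stream, d, eta=0.01, seed=0):
    """Streaming PCA via Oja's rule: one Hebbian update per sample."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for x in stream:                   # x: one sample, shape (d,)
        y = x @ w                      # projection onto current estimate
        w = w + eta * y * (x - y * w)  # Hebbian term minus Oja's decay term
        # the -y^2 w decay keeps ||w|| near 1 without explicit renormalization
    return w
```
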
We give a quasipolynomial-time algorithm for the graph matching problem (also known as noisy or robust graph isomorphism) on correlated random graphs. Specifically, for every $\gamma>0$, we give an $n^{O(\log n)}$-time algorithm that, given a pair of $\gamma$-correlated $G(n,p)$ graphs $G_0,G_1$ with average degree between $n^{\varepsilon}$ and $n^{1/153}$ for $\varepsilon = o(1)$, recovers the ground truth permutation $\pi\in S_n$ that matches the vertices of $G_0$ to the vertices of $G_1$ in the way that minimizes the number of mismatched edges. We also give a recovery algorithm for a denser regime, and a polynomial-time algorithm for distinguishing between correlated and uncorrelated graphs. Prior work showed that recovery is information-theoretically possible in this model as long as the average degree is at least $\log n$, but sub-exponential time algorithms were only known in the dense case (i.e., for $p > n^{-o(1)}$). Moreover, Percolation Graph Matching, the most common heuristic for this problem, has been shown to require knowledge of $n^{\Omega(1)}$ seeds (i.e., input/output pairs of the permutation $\pi$) to succeed in this regime. In contrast, our algorithms require no seeds and succeed for $p$ as low as $n^{o(1)-1}$.
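
One common parameterization of the $\gamma$-correlated $G(n,p)$ model (the paper's exact conventions may differ) samples a parent graph and two correlated subsamples, as in the sketch below; note that the permutation $\pi$ is hidden from the algorithm, so no seeds are available:

```python
import random

def correlated_gnp(n, p, gamma, seed=0):
    """Sample gamma-correlated G(n,p) graphs (requires p <= gamma): each
    parent edge appears with probability p/gamma and survives into each
    child graph independently with probability gamma, so each child is
    marginally G(n,p). G1 is relabeled by a hidden permutation pi."""
    rng = random.Random(seed)
    pi = list(range(n))
    rng.shuffle(pi)                    # hidden ground-truth matching
    G0, G1 = set(), set()
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p / gamma:        # parent edge
                if rng.random() < gamma:
                    G0.add((u, v))
                if rng.random() < gamma:
                    a, b = pi[u], pi[v]
                    G1.add((min(a, b), max(a, b)))
    return G0, G1, pi
```
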