
On the Exponential Sample Complexity of the Quantum State Sign Estimation Problem

Published by Arthur Rattew
Publication date: 2021
Paper language: English





We demonstrate that the ability to estimate the relative sign of an arbitrary $n$-qubit quantum state (with real amplitudes), given only $k$ copies of that state, would yield a $kn$-query algorithm for unstructured search. Thus the quantum sample complexity of sign estimation must be exponential: $\Omega(2^{n/2}/n)$. In particular, we show that an efficient procedure for solving the sign estimation problem would allow for a polynomial-time solution to the NP-complete problem 3-SAT.
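As a reading aid (our sketch, not the paper's own derivation), the arithmetic behind the stated bound follows from the two claims in the abstract together with the standard $\Omega(\sqrt{N})$ query lower bound for unstructured search over $N$ items:

% Sketch: sign estimation with k copies yields a kn-query unstructured-search
% algorithm (per the abstract); the BBBV / Grover-optimality lower bound over
% N = 2^n items then forces k to be exponential in n.
$$ kn \;=\; \Omega\big(\sqrt{2^{\,n}}\big) \;=\; \Omega\big(2^{n/2}\big) \quad\Longrightarrow\quad k \;=\; \Omega\big(2^{n/2}/n\big). $$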




Read also

In learning theory, the VC dimension of a concept class $C$ is the most common way to measure its richness. In the PAC model $$ \Theta\Big(\frac{d}{\varepsilon} + \frac{\log(1/\delta)}{\varepsilon}\Big) $$ examples are necessary and sufficient for a learner to output, with probability $1-\delta$, a hypothesis $h$ that is $\varepsilon$-close to the target concept $c$. In the related agnostic model, where the samples need not come from a $c\in C$, we know that $$ \Theta\Big(\frac{d}{\varepsilon^2} + \frac{\log(1/\delta)}{\varepsilon^2}\Big) $$ examples are necessary and sufficient to output a hypothesis $h\in C$ whose error is at most $\varepsilon$ worse than the best concept in $C$. Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio, improved by Zhang, showed that in the PAC setting, quantum examples cannot be much more powerful: the required number of quantum examples is $$ \Omega\Big(\frac{d^{1-\eta}}{\varepsilon} + d + \frac{\log(1/\delta)}{\varepsilon}\Big) \mbox{ for all } \eta > 0. $$ Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a $\log(d/\varepsilon)$ factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the Pretty Good Measurement on the quantum state identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors.
We generalize the PAC (probably approximately correct) learning model to the quantum world by generalizing the concepts from classical functions to quantum processes, defining the problem of \emph{PAC learning quantum process}, and study its sample complexity. In the problem of PAC learning quantum process, we want to learn an $\epsilon$-approximation of an unknown quantum process $c^*$ from a known finite concept class $C$ with probability $1-\delta$ using samples $\{(x_1,c^*(x_1)),(x_2,c^*(x_2)),\dots\}$, where $\{x_1,x_2,\dots\}$ are computational basis states sampled from an unknown distribution $D$ and $\{c^*(x_1),c^*(x_2),\dots\}$ are the (possibly mixed) quantum states output by $c^*$. The special case of PAC learning a quantum process under constant input reduces to a natural problem which we call approximate state discrimination, where we are given copies of an unknown quantum state $c^*$ from a known finite set $C$, and we want to learn, with probability $1-\delta$, an $\epsilon$-approximation of $c^*$ with as few copies of $c^*$ as possible. We show that the problem of PAC learning quantum process can be solved with $$O\left(\frac{\log|C| + \log(1/\delta)}{\epsilon^2}\right)$$ samples when the outputs are pure states and $$O\left(\frac{\log^3|C|\,(\log|C|+\log(1/\delta))}{\epsilon^2}\right)$$ samples if the outputs can be mixed. Some implications of our results are that we can PAC-learn a polynomial-sized quantum circuit in polynomial samples, and that approximate state discrimination can be solved in polynomial samples even when the concept class size $|C|$ is exponential in the number of qubits, an exponential improvement over full state tomography.
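To illustrate the polynomial-samples implication (a back-of-the-envelope check, not a statement from the paper): for a concept class of $n$-qubit circuits with at most $\mathrm{poly}(n)$ gates drawn from a fixed finite gate set, one has $|C| \le 2^{\mathrm{poly}(n)}$, so the pure-output sample bound above is polynomial in $n$ even though $|C|$ is exponential:

% Plugging |C| <= 2^{poly(n)} into the pure-output sample bound quoted above.
$$ O\!\left(\frac{\log|C| + \log(1/\delta)}{\epsilon^2}\right) \;=\; O\!\left(\frac{\mathrm{poly}(n) + \log(1/\delta)}{\epsilon^2}\right). $$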
The closest pair problem is a fundamental problem of computational geometry: given a set of $n$ points in a $d$-dimensional space, find a pair with the smallest distance. A classical algorithm taught in introductory courses solves this problem in $O(n\log n)$ time in constant dimensions (i.e., when $d=O(1)$). This paper asks and answers the question of the problem's quantum time complexity. Specifically, we give an $\tilde{O}(n^{2/3})$ algorithm in constant dimensions, which is optimal up to a polylogarithmic factor by the lower bound on the quantum query complexity of element distinctness. The key to our algorithm is an efficient history-independent data structure that supports quantum interference. In $\mathrm{polylog}(n)$ dimensions, no known quantum algorithms perform better than brute force search, with a quadratic speedup provided by Grover's algorithm. To give evidence that the quadratic speedup is nearly optimal, we initiate the study of quantum fine-grained complexity and introduce the Quantum Strong Exponential Time Hypothesis (QSETH), which is based on the assumption that Grover's algorithm is optimal for CNF-SAT when the clause width is large. We show that the naive Grover approach to closest pair in higher dimensions is optimal up to an $n^{o(1)}$ factor unless QSETH is false. We also study the bichromatic closest pair problem and the orthogonal vectors problem, with broadly similar results.
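For context on the quadratic-speedup baseline mentioned above (standard Grover arithmetic, not specific to this paper): brute force examines the $\Theta(n^2)$ candidate pairs, and Grover search over $M$ candidates uses $O(\sqrt{M})$ queries, so the naive quantum baseline is $\tilde{O}(n)$, which the $\tilde{O}(n^{2/3})$ algorithm improves on in constant dimensions:

% Grover over the Theta(n^2) candidate pairs versus the paper's algorithm.
$$ O\big(\sqrt{n^2}\big) = O(n) \quad\text{vs.}\quad \tilde{O}\big(n^{2/3}\big) \text{ in constant dimensions.} $$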
We study the quantum query complexity of finding a certificate for a $d$-regular, $k$-level balanced NAND formula. Up to logarithmic factors, we show that the query complexity is $\Theta(d^{(k+1)/2})$ for 0-certificates, and $\Theta(d^{k/2})$ for 1-certificates. In particular, this shows that the zero-error quantum query complexity of evaluating such formulas is $O(d^{(k+1)/2})$ (again neglecting a logarithmic factor). Our lower bound relies on the fact that the quantum adversary method obeys a direct sum theorem.
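A quick consistency check of the stated exponents at the base case $k=1$ (our illustration, not from the paper): for a single NAND gate of fan-in $d$, a 1-certificate is any one input equal to 0, which Grover search finds with $\Theta(\sqrt{d})$ queries, while a 0-certificate must exhibit all $d$ inputs equal to 1, which costs $\Theta(d)$ queries:

% k = 1: a single fan-in-d NAND gate.
$$ \Theta\big(d^{k/2}\big)\big|_{k=1} = \Theta\big(\sqrt{d}\,\big) \ \text{(1-certificates)}, \qquad \Theta\big(d^{(k+1)/2}\big)\big|_{k=1} = \Theta(d) \ \text{(0-certificates)}. $$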
We identify a formal connection between physical problems related to the detection of separable (unentangled) quantum states and complexity classes in theoretical computer science. In particular, we show that to nearly every quantum interactive proof complexity class (including BQP, QMA, QMA(2), and QSZK), there corresponds a natural separability testing problem that is complete for that class. Of particular interest is the fact that the problem of determining whether an isometry can be made to produce a separable state is either QMA-complete or QMA(2)-complete, depending upon whether the distance between quantum states is measured by the one-way LOCC norm or the trace norm. We obtain strong hardness results by proving that for each n-qubit maximally entangled state there exists a fixed one-way LOCC measurement that distinguishes it from any separable state with error probability that decays exponentially in n.