We present a number of results related to quantum algorithms with small error probability and quantum algorithms that are zero-error. First, we give a tight analysis of the trade-offs between the number of queries of quantum search algorithms, their error probability, the size of the search space, and the number of solutions in this space. Using this, we deduce new lower and upper bounds for quantum …
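For context, the tight query/error trade-off this abstract refers to can be written in display form for the single-solution case (stated here as we recall this line of work, not quoted from the paper):
\[
T(N,\epsilon) \;=\; \Theta\!\left(\sqrt{N \log(1/\epsilon)}\right),
\]
which is strictly better than the $O(\sqrt{N}\,\log(1/\epsilon))$ obtained by naively repeating a constant-error Grover search $\log(1/\epsilon)$ times.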
We define a new query measure we call quantum distinguishing complexity, denoted $\mathrm{QD}(f)$ for a Boolean function $f$. Unlike a quantum query algorithm, which must output a state close to $|0\rangle$ on a 0-input and a state close to $|1\rangle$ on a 1-input, a quantum distinguishing algorithm can output any state, as long as the output states for any 0-input and 1-input are distinguishable. Using this measure, we establish a new relationship in query complexity: for all total functions $f$, $Q_0(f)=\widetilde{O}(Q(f)^5)$, where $Q_0(f)$ and $Q(f)$ denote the zero-error and bounded-error quantum query complexity of $f$ respectively, improving on the previously known sixth-power relationship. We also define a query measure based on quantum statistical zero-knowledge proofs, $\mathrm{QSZK}(f)$, which is at most $Q(f)$. We show that $\mathrm{QD}(f)$ in fact lower bounds $\mathrm{QSZK}(f)$ and not just $Q(f)$. $\mathrm{QD}(f)$ also upper bounds the (positive-weights) adversary bound, which yields the following relationships for all $f$: $Q(f) \ge \mathrm{QSZK}(f) \ge \mathrm{QD}(f) = \Omega(\mathrm{Adv}(f))$. This sheds some light on why the adversary bound proves suboptimal bounds for problems like Collision and Set Equality, which have low QSZK complexity. Lastly, we show implications for lifting theorems in communication complexity. We show that a general lifting theorem for either zero-error quantum query complexity or for QSZK would imply a general lifting theorem for bounded-error quantum query complexity.
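Collected in one display, the relationships asserted above read:
\[
Q(f) \;\ge\; \mathrm{QSZK}(f) \;\ge\; \mathrm{QD}(f) \;=\; \Omega(\mathrm{Adv}(f)),
\qquad
Q_0(f) \;=\; \widetilde{O}\!\left(Q(f)^5\right) \text{ for all total } f.
\]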
Variational quantum time evolution (VarQTE) allows us to simulate dynamical quantum systems with parameterized quantum circuits. We derive a posteriori, global phase-agnostic error bounds for real and imaginary time evolution based on McLachlan's variational principle that can be evaluated efficiently. Rigorous error bounds are crucial in practice to adaptively choose variational circuits and to analyze the quality of optimization algorithms. The power of the new error bounds, as well as the performance of VarQTE, are demonstrated on numerical examples.
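To make the setting concrete, here is a minimal numerical sketch of the McLachlan update that underlies VarQTE. The callables metric and grad, which in practice would come from circuit evaluations, are hypothetical placeholders, and this is a plain forward-Euler integrator rather than the paper's method or any particular library API:

    import numpy as np

    def varqte_step(theta, metric, grad, dt):
        # McLachlan's variational principle reduces time evolution of an
        # ansatz |psi(theta)> to an ODE  A(theta) * dtheta/dt = C(theta), with
        #   A_ij = Re <d_i psi | d_j psi>    (Fubini-Study metric)
        #   C_i  = Im <d_i psi | H | psi>    (real time; sign conventions vary)
        # For imaginary time, C_i = -Re <d_i psi | H | psi> instead.
        A = metric(theta)
        C = grad(theta)
        # Least squares handles the typically ill-conditioned metric.
        theta_dot, *_ = np.linalg.lstsq(A, C, rcond=None)
        return theta + dt * theta_dot

    def varqte(theta0, metric, grad, dt=1e-2, steps=100):
        theta = np.asarray(theta0, dtype=float)
        for _ in range(steps):
            theta = varqte_step(theta, metric, grad, dt)
        return theta

The a posteriori error bounds of the abstract would be accumulated alongside this loop from efficiently measurable quantities; that bookkeeping is omitted here.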
The pointer function of Göös, Pitassi and Watson \cite{DBLP:journals/eccc/GoosP015a} and its variants have recently been used to prove separation results among various measures of complexity such as deterministic, randomized and quantum query complexities, exact and approximate polynomial degrees, etc. In particular, the widest possible (quadratic) separations between deterministic and zero-error randomized query complexity, as well as between bounded-error and zero-error randomized query complexity, have been obtained by considering \emph{variants}~\cite{DBLP:journals/corr/AmbainisBBL15} of this pointer function. However, as was pointed out in \cite{DBLP:journals/corr/AmbainisBBL15}, the precise zero-error complexity of the original pointer function was not known. We show a lower bound of $\widetilde{\Omega}(n^{3/4})$ on the zero-error randomized query complexity of the pointer function on $\Theta(n \log n)$ bits; since an $\widetilde{O}(n^{3/4})$ upper bound is already known \cite{DBLP:conf/fsttcs/MukhopadhyayS15}, our lower bound is optimal up to a factor of $\mathrm{polylog}\, n$.
We investigate the randomized and quantum communication complexities of the well-studied Equality function with small error probability $\epsilon$, getting the optimal constant factors in the leading terms in a number of different models. In the randomized model, 1) we give a general technique to convert public-coin protocols to private-coin protocols by incurring a small multiplicative error, at a small additive cost. This is an improvement over Newman's theorem [Inf. Proc. Lett. '91] in the dependence on the error parameter. 2) Using this we obtain a $(\log(n/\epsilon^2)+4)$-cost private-coin communication protocol that computes the $n$-bit Equality function to error $\epsilon$. This improves upon the $\log(n/\epsilon^3)+O(1)$ upper bound implied by Newman's theorem, and matches the best known lower bound, which follows from Alon [Comb. Probab. Comput. '09], up to an additive $\log\log(1/\epsilon)+O(1)$. In the quantum model, 1) we exhibit a one-way protocol of cost $\log(n/\epsilon)+4$ that uses only pure states and computes the $n$-bit Equality function to error $\epsilon$. This bound was implicitly already shown by Nayak [PhD thesis '99]. 2) We show that any $\epsilon$-error one-way protocol for $n$-bit Equality that uses only pure states communicates at least $\log(n/\epsilon)-\log\log(1/\epsilon)-O(1)$ qubits. 3) We exhibit a one-way protocol of cost $\log(\sqrt{n}/\epsilon)+3$ that uses mixed states and computes the $n$-bit Equality function to error $\epsilon$. This is also tight up to an additive $\log\log(1/\epsilon)+O(1)$, which follows from Alon's result. Our upper bounds also yield upper bounds on the approximate rank and related measures of the Identity matrix. This also implies improved upper bounds on these measures for the distributed SINK function, which was recently used to refute the randomized and quantum log-approximate-rank conjectures.
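As background for the model, here is a minimal sketch of the folklore public-coin Equality protocol, which is the textbook baseline these bounds improve on, not the derandomized private-coin protocol constructed in the abstract above. If $x \ne y$, each one-bit round detects the mismatch with probability exactly $1/2$, so $\lceil\log_2(1/\epsilon)\rceil$ rounds drive the (one-sided) error below $\epsilon$:

    import math
    import secrets

    def inner_product_parity(r: bytes, x: bytes) -> int:
        # Parity of the inner product <r, x> over GF(2).
        return bin(int.from_bytes(bytes(a & b for a, b in zip(r, x)), "big")).count("1") % 2

    def equality(x: bytes, y: bytes, eps: float = 1e-3) -> bool:
        # Public-coin protocol: Alice sends one parity bit per round.
        # If x != y then <r, x> and <r, y> differ with probability 1/2 per round.
        assert len(x) == len(y)
        rounds = math.ceil(math.log2(1 / eps))
        for _ in range(rounds):
            r = secrets.token_bytes(len(x))  # shared (public-coin) randomness
            if inner_product_parity(r, x) != inner_product_parity(r, y):
                return False  # definitely unequal
        return True  # equal, or unequal with probability at most eps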
Variational Quantum Algorithms (VQAs) are a promising application for near-term quantum processors; however, the quality of their results is greatly limited by noise. For this reason, various error mitigation techniques that can be applied to these algorithms have emerged to deal with noise. Recent work introduced a technique for mitigating expectation values against correlated measurement errors that can be applied to measurements of tens of qubits. We apply this technique to VQAs and demonstrate its effectiveness in improving estimates of the cost function. Moreover, we use the data resulting from this technique to experimentally characterize measurement errors in terms of the device connectivity on devices of up to 20 qubits. These results should be useful both for better understanding the near-term potential of VQAs and for understanding the correlations in measurement errors on large, near-term devices.
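At its core, measurement-error mitigation of this kind inverts an empirically estimated response (confusion) matrix. Below is a minimal single-register sketch under that assumption; the correlated, scalable variants referenced above replace the dense $2^n \times 2^n$ matrix with structured estimators, and the helper names here are illustrative, not a library API:

    import numpy as np

    def mitigate_distribution(raw_probs, confusion):
        # raw_probs[i]: observed frequency of bitstring i (length 2^n).
        # confusion[i, j]: P(measure i | prepared j), estimated from
        # calibration circuits that prepare each computational basis state.
        quasi, *_ = np.linalg.lstsq(confusion, np.asarray(raw_probs), rcond=None)
        quasi = np.clip(quasi, 0.0, None)   # project back onto the simplex
        return quasi / quasi.sum()

Mitigated expectation values for a VQA cost function can then be computed from the corrected distribution instead of the raw counts.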