Random access codes have provided many examples of quantum advantage in communication, but concern only one kind of information retrieval task. We introduce a related task - the Torpedo Game - and show that it admits greater quantum advantage than the comparable random access code. Perfect quantum strategies involving prepare-and-measure protocols with experimentally accessible three-level systems emerge via analysis in terms of the discrete Wigner function. The example is leveraged into an operational advantage in a pacifist version of the strategy game Battleship. We pinpoint a characteristic of quantum systems that enables quantum advantage in any bounded-memory information retrieval task. While preparation contextuality has previously been linked to advantages in random access coding, we focus here on a different characteristic called sequential contextuality. It is shown not only to be necessary and sufficient for quantum advantage, but also to quantify the degree of advantage. Our perfect qutrit strategy for the Torpedo Game entails the strongest type of inconsistency with non-contextual hidden variables, revealing logical paradoxes with respect to those assumptions.
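The discrete Wigner function underlying the qutrit analysis can be illustrated numerically. The following is a minimal sketch, assuming the standard phase-point-operator construction for odd prime dimensions (Gross-style, not the paper's own code), showing that a stabilizer state has a nonnegative distribution while the qutrit "strange state" reaches the maximal negativity of $-1/3$:

```python
import numpy as np

d = 3                                   # qutrit dimension
w = np.exp(2j * np.pi / d)              # primitive d-th root of unity

# Generalized Pauli (shift and clock) operators
X = np.roll(np.eye(d), 1, axis=0)       # X|j> = |j+1 mod d>
Z = np.diag([w**j for j in range(d)])   # Z|j> = w^j |j>

# Phase-point operators: A_0 is the parity operator |x> -> |-x mod d>,
# and A_{q,p} = X^q Z^p A_0 (X^q Z^p)^dagger (displacement phases cancel).
A0 = np.zeros((d, d))
for x in range(d):
    A0[(-x) % d, x] = 1.0

def phase_point(q, p):
    D = np.linalg.matrix_power(X, q) @ np.linalg.matrix_power(Z, p)
    return D @ A0 @ D.conj().T

def discrete_wigner(rho):
    """W(q,p) = (1/d) Tr[rho A_{q,p}]; real and sums to 1 for any state."""
    return np.array([[np.real(np.trace(rho @ phase_point(q, p))) / d
                      for p in range(d)] for q in range(d)])

# A stabilizer state such as |0> has a nonnegative Wigner function ...
rho0 = np.zeros((d, d)); rho0[0, 0] = 1.0
print(discrete_wigner(rho0))

# ... while the qutrit "strange state" (|1> - |2>)/sqrt(2) shows
# maximal negativity, W(0,0) = -1/3, the resource behind the advantage.
s = np.array([0, 1, -1]) / np.sqrt(2)
rho_s = np.outer(s, s.conj())
W = discrete_wigner(rho_s)
print(W, W.sum())                       # W(0,0) = -1/3, total = 1
```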
In (single-server) Private Information Retrieval (PIR), a server holds a large database $DB$ of size $n$, and a client holds an index $i \in [n]$ and wishes to retrieve $DB[i]$ without revealing $i$ to the server. It is well known that information-theoretic privacy, even against an `honest but curious' server, requires $\Omega(n)$ communication complexity. This is true even if quantum communication is allowed, and is due to the ability of such an adversarial server to execute the protocol on a superposition of databases instead of on a specific database (an `input purification' attack). Nevertheless, there have been some proposals of protocols that achieve sub-linear communication and appear to provide some notion of privacy. Most notably, a protocol due to Le Gall (ToC 2012) with communication complexity $O(\sqrt{n})$, and a protocol by Kerenidis et al. (QIC 2016) with communication complexity $O(\log(n))$ and $O(n)$ shared entanglement. We show that, in a sense, input purification is the only potent adversarial strategy, and protocols such as the two above are secure in a restricted variant of the quantum honest-but-curious (a.k.a. specious) model. More explicitly, we propose a restricted privacy notion called \emph{anchored privacy}, where the adversary is forced to execute on a classical database (i.e. the execution is anchored to a classical database). We show that for measurement-free protocols, anchored security against honest adversarial servers implies anchored privacy even against specious adversaries. Finally, we prove that even with (unlimited) pre-shared entanglement it is impossible to achieve security in the standard specious model with sub-linear communication, thus further substantiating the necessity of our relaxation. This lower bound may be of independent interest (in particular recalling that PIR is a special case of Fully Homomorphic Encryption).
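For reference, the $\Omega(n)$ baseline is met by the trivial protocol in which the server sends the entire database, which is perfectly private because the response carries no dependence on the query. A toy sketch of this baseline (illustrative only, not one of the cited protocols) is:

```python
from typing import List

def server_respond(db: List[int]) -> List[int]:
    # Trivial single-server PIR: the response is independent of the query,
    # so the server learns nothing about i -- at the price of n bits sent.
    return list(db)

def client_retrieve(i: int, response: List[int]) -> int:
    return response[i]

db = [0, 1, 1, 0, 1, 0, 1, 1]
assert client_retrieve(5, server_respond(db)) == db[5]
# Communication: len(db) bits downstream. The Omega(n) lower bound says no
# information-theoretically private single-server protocol can do
# asymptotically better, classically or (per the result above) quantumly
# in the standard specious model.
```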
We study the performance of classical and quantum machine learning (ML) models in predicting outcomes of physical experiments. The experiments depend on an input parameter $x$ and involve execution of a (possibly unknown) quantum process $\mathcal{E}$. Our figure of merit is the number of runs of $\mathcal{E}$ required to achieve a desired prediction performance. We consider classical ML models that perform a measurement and record the classical outcome after each run of $\mathcal{E}$, and quantum ML models that can access $\mathcal{E}$ coherently to acquire quantum data; the classical or quantum data is then used to predict outcomes of future experiments. We prove that for any input distribution $\mathcal{D}(x)$, a classical ML model can provide accurate predictions on average by accessing $\mathcal{E}$ a number of times comparable to the optimal quantum ML model. In contrast, for achieving accurate prediction on all inputs, we prove that exponential quantum advantage is possible. For example, to predict expectations of all Pauli observables in an $n$-qubit system $\rho$, classical ML models require $2^{\Omega(n)}$ copies of $\rho$, but we present a quantum ML model using only $\mathcal{O}(n)$ copies. Our results clarify where quantum advantage is possible and highlight the potential for classical ML models to address challenging quantum problems in physics and chemistry.
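To make the Pauli-observable example concrete, the sketch below simulates the classical single-copy strategy for one assumed observable ($P = X \otimes Z$ on two qubits): measure each copy in the eigenbasis of $P$ and average the outcomes. Covering all $4^n$ Pauli strings this way is what drives the exponential copy count; the construction is illustrative and is not the paper's quantum ML model.

```python
import numpy as np
rng = np.random.default_rng(0)

# A 2-qubit observable P = X (x) Z, chosen for illustration; the paper's
# claim covers all 4^n Pauli strings.
X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
P = np.kron(X, Z)

# A fixed (here: random pure) 2-qubit state rho
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Classical single-copy strategy: measure in the eigenbasis of P on each
# copy and average the observed eigenvalues.
evals, evecs = np.linalg.eigh(P)
probs = np.real([v.conj() @ rho @ v for v in evecs.T])
samples = rng.choice(evals, size=2000, p=probs / probs.sum())
print("estimate:", samples.mean(), " exact:", np.real(np.trace(P @ rho)))

# Repeating this per Pauli string costs on the order of 4^n / eps^2 copies;
# the quoted result shows any classical single-copy strategy needs
# 2^{Omega(n)} copies to cover all Paulis, while a quantum-memory strategy
# with entangled measurements needs only O(n) copies.
```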
We show that postselection offers a nonclassical advantage in metrology. In every parameter-estimation experiment, the final measurement or the postprocessing incurs some cost. Postselection can improve the ratio of the Fisher information (the average information learned about an unknown parameter per experimental trial) to this cost. This improvement, we show, stems from the negativity of a quasiprobability distribution, a quantum extension of a probability distribution. In a classical theory, in which all observables commute, our quasiprobability distribution can be expressed as real and nonnegative. In a quantum-mechanically noncommuting theory, nonclassicality manifests in negative or nonreal quasiprobabilities. The distribution's nonclassically negative values enable postselected experiments to outperform even postselection-free experiments whose input states and final measurements are optimized: postselected quantum experiments can yield anomalously large information-cost rates. We prove that this advantage is genuinely nonclassical: no classically commuting theory can describe any quantum experiment that delivers an anomalously large Fisher information. Finally, we outline a preparation-and-postselection procedure that can yield an arbitrarily large Fisher information. Our results establish the nonclassicality of a metrological advantage, leveraging our quasiprobability distribution as a mathematical tool.
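The quasiprobability in question is of Kirkwood-Dirac type; a minimal sketch (an illustration under that assumption, not the paper's code) computes $Q(a,f) = \mathrm{Tr}[\,|f\rangle\langle f|\,|a\rangle\langle a|\,\rho\,]$ for a qubit over two noncommuting bases and exhibits nonreal entries while the total remains 1:

```python
import numpy as np

# Kirkwood-Dirac-type quasiprobability over the Z and X eigenbases;
# entries can be negative or nonreal, which is the nonclassicality witness.
Z_basis = [np.array([1, 0]), np.array([0, 1])]
X_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]

theta = 0.4                                   # assumed example state
psi = np.array([np.cos(theta), 1j * np.sin(theta)])
rho = np.outer(psi, psi.conj())

# Q(a, f) = <f|a><a|rho|f> = Tr[ |f><f| |a><a| rho ]
Q = np.array([[(f.conj() @ a) * (a.conj() @ rho @ f) for f in X_basis]
              for a in Z_basis])
print(Q)                 # some entries are nonreal for this state
print(Q.sum())           # still sums to Tr[rho] = 1
```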
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems, including nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, and the attainable quantum advantage is modest. Here we solve an oracle-based problem, known as learning parity with noise, using a five-qubit superconducting processor. Running classical and quantum algorithms on the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a quantum advantage already emerges in existing noisy systems.
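As a point of comparison for the classical side, the toy sketch below queries a noisy parity oracle and recovers the hidden string by majority voting on unit-vector queries while counting oracle calls; the parameters and strategy are assumptions for illustration, not the classical algorithm benchmarked in the experiment.

```python
import numpy as np
rng = np.random.default_rng(1)

n, noise = 5, 0.1                      # problem size and bit-flip rate (assumed)
secret = rng.integers(0, 2, size=n)    # hidden parity string s

queries = 0
def oracle(x):
    """Noisy parity oracle: returns s.x mod 2, flipped with probability `noise`."""
    global queries
    queries += 1
    return (secret @ x + (rng.random() < noise)) % 2

# Toy classical strategy: query each unit vector repeatedly and majority-vote
# each bit of s. The query count grows with n and with the noise level; the
# paper's point is the widening gap to the quantum strategy on the same oracle.
est = []
for i in range(n):
    e = np.zeros(n, dtype=int); e[i] = 1
    votes = [oracle(e) for _ in range(25)]
    est.append(int(sum(votes) > len(votes) / 2))

print("recovered:", est, "true:", secret.tolist(), "queries:", queries)
```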
Weak measurements may yield a greater quantity of quantumness of correlations than standard projective measurements on a bipartite quantum state. We show that the quantumness of correlations revealed by weak measurements can be consumed for encoding information that is accessible only through coherent quantum interactions. It can therefore be regarded as a resource for quantum information processing and used to quantify this quantum advantage. We conclude that weak measurements can create more valuable quantum correlations.
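A minimal sketch of the weak-measurement formalism commonly used in this setting (Oreshkov-Brun measurement operators of strength $x$, an assumption about the formalism rather than the authors' code) checks that the operators form a valid POVM and applies them to one half of a Bell pair:

```python
import numpy as np

def weak_ops(x):
    """Oreshkov-Brun weak measurement operators of strength x on a qubit:
    both tend to I/sqrt(2) as x -> 0 and to orthogonal projectors as x grows."""
    P0 = np.diag([1.0, 0.0]); P1 = np.diag([0.0, 1.0])
    t = np.tanh(x)
    Pp = np.sqrt((1 - t) / 2) * P0 + np.sqrt((1 + t) / 2) * P1
    Pm = np.sqrt((1 + t) / 2) * P0 + np.sqrt((1 - t) / 2) * P1
    return Pp, Pm

Pp, Pm = weak_ops(0.3)
print(np.allclose(Pp.conj().T @ Pp + Pm.conj().T @ Pm, np.eye(2)))  # valid POVM

# Apply the weak measurement to qubit B of a Bell state; for small x the
# correlations are only weakly disturbed, which is the regime discussed above.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(bell, bell)
for M in weak_ops(0.3):
    K = np.kron(np.eye(2), M)
    post = K @ rho @ K.conj().T
    print("outcome probability:", np.real(np.trace(post)))          # 0.5 each
```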