A proof of quantumness is a type of challenge-response protocol in which a classical verifier can efficiently certify the quantum advantage of an untrusted prover. That is, a quantum prover can correctly answer the verifier's challenges and be accepted, while any polynomial-time classical prover will be rejected with high probability, based on plausible computational assumptions. To answer the verifier's challenges, existing proofs of quantumness typically require the quantum prover to perform a combination of polynomial-size quantum circuits and measurements. In this paper, we show that all existing proofs of quantumness can be modified so that the prover need only perform constant-depth quantum circuits (and measurements) together with log-depth classical computation. We thus obtain depth-efficient proofs of quantumness whose soundness can be based on the same assumptions as existing schemes (factoring, discrete logarithm, learning with errors, or ring learning with errors).
A proof of quantumness is a method for provably demonstrating (to a classical verifier) that a quantum device can perform computational tasks that a classical device with comparable resources cannot. Providing a proof of quantumness is the first step towards constructing a useful quantum computer. There are currently three approaches for exhibiting proofs of quantumness: (i) Inverting a classically-hard one-way function (e.g. using Shor's algorithm). This seems technologically out of reach. (ii) Sampling from a classically-hard-to-sample distribution (e.g. BosonSampling). This may be within reach of near-term experiments, but for all such tasks the known verification methods require exponential time. (iii) Interactive protocols based on cryptographic assumptions. The use of a trapdoor scheme allows for efficient verification, and implementation seems to require far fewer resources than (i), yet still more than (ii). In this work we propose a significant simplification to approach (iii) by employing the random oracle heuristic. (We note that we do not apply the Fiat-Shamir paradigm.) We give a two-message (challenge-response) proof of quantumness based on any trapdoor claw-free function. In contrast to earlier proposals we do not need an adaptive hard-core bit property. This allows the use of smaller security parameters and more diverse computational assumptions (such as Ring Learning with Errors), significantly reducing the quantum computational effort required for a successful demonstration.
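To make the two-message (challenge-response) structure concrete, the following is a minimal Python sketch under strong simplifying assumptions: the trapdoor claw-free function is replaced by an insecure hidden-XOR-shift toy, the honest quantum prover's measurement statistics are simulated classically using the trapdoor (which a real prover would not know), and the acceptance test is a parity condition of roughly the form d·(x0⊕x1) = H(x0)⊕H(x1) that the ideal prover satisfies. All names and parameters are illustrative, not the construction or exact check used in the paper.

```python
# Illustrative sketch only; the "TCF" below is an insecure toy and the quantum
# prover is simulated classically using the trapdoor.
import hashlib
import secrets

N = 8  # toy input length in bits

def H_bit(x: int) -> int:
    """Random oracle with a single output bit (instantiated with SHA-256)."""
    return hashlib.sha256(x.to_bytes(2, "big")).digest()[0] & 1

def keygen() -> int:
    """Toy 2-to-1 function f(x) = G(min(x, x ^ s)); the hidden shift s is the
    trapdoor. (A real TCF would also have a public key sent as message 1.)"""
    return secrets.randbelow(2 ** N - 1) + 1   # nonzero shift s

def f(x: int, s: int) -> bytes:
    return hashlib.sha256(min(x, x ^ s).to_bytes(2, "big")).digest()

def dot(a: int, b: int) -> int:
    """Inner product of bit strings, mod 2."""
    return bin(a & b).count("1") % 2

def honest_prover_simulated(s: int):
    """Samples the ideal quantum prover's response (y, d); simulated here with
    the trapdoor, which a real prover would not have."""
    x0 = secrets.randbelow(2 ** N)
    x1 = x0 ^ s
    y = f(x0, s)
    target = H_bit(x0) ^ H_bit(x1)
    while True:  # Hadamard measurement: d uniform over {d : d.(x0^x1) = target}
        d = secrets.randbelow(2 ** N)
        if dot(d, x0 ^ x1) == target:
            return y, d

def verify(y: bytes, d: int, s: int) -> bool:
    """Recover the claw (x0, x1) from y via the trapdoor and check the parity
    condition d.(x0 ^ x1) = H(x0) ^ H(x1)."""
    for x0 in range(2 ** N):          # brute force stands in for trapdoor inversion
        if f(x0, s) == y:
            x1 = x0 ^ s
            return dot(d, x0 ^ x1) == (H_bit(x0) ^ H_bit(x1))
    return False

s = keygen()                       # verifier samples the function (message 1)
y, d = honest_prover_simulated(s)  # prover's (simulated) response (message 2)
print(verify(y, d, s))             # True: the ideal quantum prover always passes
```

The point of the sketch is the shape of the interaction: a single challenge, a single response, and a verifier check that needs the trapdoor but no quantum resources; soundness against classical provers is what the cryptographic assumption is for, and is not modeled here.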
The reliability of quantum channels for transmitting information is of profound importance from the perspective of quantum information. This naturally leads to the question of how well a quantum state is preserved when subjected to a quantum channel. We propose a measure of quantumness of channels based on the non-commutativity of quantum states that is intuitive and easy to compute. We apply the proposed measure to some well-known noise channels, both Markovian and non-Markovian, and find that the results are in good agreement with those from a recently introduced $l_1$-norm coherence based measure.
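One simple way to turn non-commutativity into a number, offered here only as an illustrative reading of the idea and not necessarily the paper's exact definition, is the trace norm of the commutator between an input state and its image under the channel. The NumPy sketch below evaluates this for a single-qubit amplitude-damping channel; the channel and input state are example choices.

```python
# Illustrative sketch: non-commutativity between a state and its channel output,
# quantified by the trace norm of their commutator. Details may differ from the
# measure defined in the paper.
import numpy as np

def trace_norm(A):
    """||A||_1 = sum of singular values."""
    return np.sum(np.linalg.svd(A, compute_uv=False))

def amplitude_damping(rho, gamma):
    """Single-qubit amplitude-damping channel with decay probability gamma."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# |+> state: maximally coherent single-qubit input
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)

for gamma in (0.0, 0.3, 0.6, 0.9):
    out = amplitude_damping(plus, gamma)
    q = trace_norm(plus @ out - out @ plus)   # || [rho, E(rho)] ||_1
    print(f"gamma = {gamma:.1f}  non-commutativity = {q:.4f}")
```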
We propose a new measure of relative incompatibility for a quantum system with respect to two non-commuting observables, and call it quantumness of relative incompatibility. In the case of a classical state, the order of observation is inconsequential, and hence the probability distribution of outcomes of any observable remains undisturbed. We define the relative entropy of the two marginal probability distributions as a measure of quantumness in the state, which is revealed only in the presence of two non-commuting observables. As with other such measures, we show that the proposed measure satisfies some basic axioms. We also find that this measure exhibits complementarity with quantum coherence. The relation is most vivid when one of the observables is chosen so that its eigenbasis matches the basis in which the coherence is measured. Our results indicate that quantumness in a single system is still an interesting question to explore, and that there can be an inherent feature of the state which manifests beyond the idea of quantum coherence.
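One natural reading of this definition, sketched below under illustrative assumptions (observable A = sigma_z, observable B = sigma_x, input state |+>), compares the outcome distribution of B measured directly on the state with the distribution obtained after a prior non-selective measurement of A, and takes their relative entropy. The exact construction in the paper may differ in detail.

```python
# Sketch of one reading of the measure; observables and state are example choices.
import numpy as np

def outcome_probs(rho, observable):
    """Probabilities of the eigenvalue outcomes of `observable` on state rho."""
    _, vecs = np.linalg.eigh(observable)
    return np.real([v.conj() @ rho @ v for v in vecs.T])

def dephase(rho, observable):
    """Non-selective projective measurement of `observable` (removes coherence
    in its eigenbasis)."""
    _, vecs = np.linalg.eigh(observable)
    return sum(np.outer(v, v.conj()) * (v.conj() @ rho @ v) for v in vecs.T)

def relative_entropy(p, q):
    p, q = np.asarray(p), np.asarray(q)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

sz = np.array([[1, 0], [0, -1]], dtype=complex)   # observable A
sx = np.array([[0, 1], [1, 0]], dtype=complex)    # observable B

# |+> state: coherent in the eigenbasis of A, definite outcome for B
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)

p = outcome_probs(plus, sx)                 # measure B directly
q = outcome_probs(dephase(plus, sz), sx)    # measure A first, then B
print(relative_entropy(p, q))               # 1.0: quantumness revealed by A
```

For the |+> state the direct measurement of B is deterministic while the A-then-B sequence is uniformly random, so the measure returns one bit, consistent with the intuition that a prior incompatible observation disturbs the statistics only for non-classical states.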
Quantum mechanics marks a radical departure from the classical understanding of Nature, fostering an inherent randomness which forbids a deterministic description; yet the most fundamental departure arises from something different. As shown by Bell [1] and Kochen-Specker [2], quantum mechanics portrays a picture of the world in which reality loses its objectivity and is in fact created by observation. Quantum mechanics predicts phenomena which cannot be explained by any theory with objective realism, although our everyday experience supports the hypothesis that macroscopic objects, despite being made of quantum particles, exist independently of the act of observation; in this paper we identify this behavior as classical. Here we show that this seemingly obvious classical behavior of the macroscopic world cannot be experimentally tested and belongs to the realm of ontology, similar to the dispute on the interpretations of quantum mechanics [3,4]. For small systems such as a single photon [5] or a pair [6], it has been experimentally proven that a classical description cannot be sustained. Recently, there have also been experiments that claim to have demonstrated quantum behavior of relatively large objects, such as interference of fullerenes [7], the violation of the Leggett-Garg inequality in a Josephson junction [8], and interference between two condensed clouds of atoms [9], which suggest that there is no limit to the size of the system on which the quantum-versus-classical question can be tested. These behaviors, however, are not sufficient to refute a classical description in the sense of objective reality. Our findings show that once we reach the regime where an Avogadro number of particles is present, the quantum-versus-classical question cannot be answered experimentally.
Randomness plays a central role in the quantum mechanical description of our interactions. We review the relationship between the violation of Bell inequalities, non-signaling, and randomness. We discuss the challenge of defining a random string, and show that algorithmic information theory provides a necessary condition for randomness using Borel normality. We close with a view on incomputability and its implications in physics.
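As a concrete illustration of the Borel-normality criterion mentioned above, here is a short Python sketch in the spirit of Calude's finite-string formulation: every block of length m (up to roughly log2 log2 n) should occur among the non-overlapping m-blocks with frequency close to 2^-m, within a tolerance of sqrt(log2(n)/n). The cutoff and tolerance follow that formulation, but treat the details as an assumption rather than the exact test used in the paper.

```python
# Finite-string Borel-normality check (in the spirit of Calude's formulation).
# Passing it is a necessary, not sufficient, condition for algorithmic
# randomness: a computable pseudo-random string can still pass.
import math
import random
from itertools import product
from collections import Counter

def is_borel_normal(bits: str) -> bool:
    n = len(bits)
    tol = math.sqrt(math.log2(n) / n)
    max_m = max(1, int(math.log2(math.log2(n))))
    for m in range(1, max_m + 1):
        # non-overlapping blocks of length m
        blocks = [bits[i:i + m] for i in range(0, n - m + 1, m)]
        counts = Counter(blocks)
        for w in map("".join, product("01", repeat=m)):
            if abs(counts[w] / len(blocks) - 2 ** -m) > tol:
                return False
    return True

random.seed(0)
pseudo = "".join(random.choice("01") for _ in range(2 ** 16))    # unbiased
biased = "".join(random.choice("0001") for _ in range(2 ** 16))  # 1s too rare
print(is_borel_normal(pseudo), is_borel_normal(biased))  # expected: True False
```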