
Trading classical and quantum computational resources

Posted by: Sergey Bravyi
Publication date: 2015
Research field: Physics
Paper language: English





We propose examples of a hybrid quantum-classical simulation where a classical computer assisted by a small quantum processor can efficiently simulate a larger quantum system. First we consider sparse quantum circuits such that each qubit participates in O(1) two-qubit gates. It is shown that any sparse circuit on n+k qubits can be simulated by sparse circuits on n qubits and classical processing that takes time $2^{O(k)}\,\mathrm{poly}(n)$. Secondly, we study Pauli-based computation (PBC), where the allowed operations are non-destructive eigenvalue measurements of n-qubit Pauli operators. The computation begins by initializing each qubit in the so-called magic state. This model is known to be equivalent to the universal quantum computer. We show that any PBC on n+k qubits can be simulated by PBCs on n qubits and classical processing that takes time $2^{O(k)}\,\mathrm{poly}(n)$. Finally, we propose a purely classical algorithm that can simulate a PBC on n qubits in time $2^{cn}\,\mathrm{poly}(n)$, where $c \approx 0.94$. This improves upon the brute-force simulation method, which takes time $2^n\,\mathrm{poly}(n)$. Our algorithm exploits the fact that n-fold tensor products of magic states admit a low-rank decomposition into n-qubit stabilizer states.
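To make the last point concrete, here is a minimal numerical sketch (an illustration added for this summary, not code from the paper) of the smallest instance of such a low-rank decomposition: two copies of a single-qubit magic state written as a superposition of only two stabilizer states. It uses the T-type magic state $|A\rangle = (|0\rangle + e^{i\pi/4}|1\rangle)/\sqrt{2}$, which is Clifford-equivalent to the $|H\rangle$ magic state; the variable names below are purely illustrative.

```python
# Numerical check that two copies of a single-qubit magic state can be written
# as a superposition of just two stabilizer states (stabilizer rank 2).
import numpy as np

# Single-qubit T-type magic state |A> = (|0> + e^{i pi/4} |1>) / sqrt(2)
A = np.array([1.0, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

# Two copies: |A> tensor |A>
AA = np.kron(A, A)

# Two (unnormalized) two-qubit stabilizer states:
#   |s1> = |00> + i|11>   (a Bell pair with a phase gate S on one qubit)
#   |s2> = |01> + |10>    (the Bell state |Psi+>, unnormalized)
s1 = np.array([1.0, 0.0, 0.0, 1j])
s2 = np.array([0.0, 1.0, 1.0, 0.0])

# Rank-2 decomposition: |A>^{(x)2} = (1/2) * (|s1> + e^{i pi/4} |s2>)
decomp = 0.5 * (s1 + np.exp(1j * np.pi / 4) * s2)

print(np.allclose(AA, decomp))  # True
```

Extending such decompositions to $n$ copies, with rank growing like $2^{cn}$ for some $c < 1$, is what allows the classical simulation described above to beat the brute-force $2^n$ scaling.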


Read also

We introduce and analyse the problem of encoding classical information into different resources of a quantum state. More precisely, we consider a general class of communication scenarios characterised by encoding operations that commute with a unique resource destroying map and leave free states invariant. Our motivating example is given by encoding information into coherences of a quantum system with respect to a fixed basis (with unitaries diagonal in that basis as encodings and the decoherence channel as a resource destroying map), but the generality of the framework allows us to explore applications ranging from super-dense coding to thermodynamics. For any state, we find that the number of messages that can be encoded into it using such operations in a one-shot scenario is upper-bounded in terms of the information spectrum relative entropy between the given state and its version with erased resources. Furthermore, if the resource destroying map is a twirling channel over some unitary group, we find matching one-shot lower-bounds as well. In the asymptotic setting where we encode into many copies of the resource state, our bounds yield an operational interpretation of resource monotones such as the relative entropy of coherence and its corresponding relative entropy variance.
We study the classical simulation complexity, in both the weak and strong senses, of matchgate (MG) computations supplemented with all combinations of settings involving the inclusion of intermediate adaptive or nonadaptive computational-basis measurements, product-state or magic and general entangled state inputs, and single- or multi-line outputs. We find a striking parallel to known results for Clifford circuits, after some rebranding of resources. We also give bounds on the amount of classical simulation effort required in the case of limited access to intermediate measurements and entangled inputs. In further settings we show that adaptive MG circuits remain classically efficiently simulable if arbitrary two-qubit entangled input states on consecutive lines are allowed, but become quantum universal for three or more lines. And if adaptive measurements in non-computational bases are allowed, even with just computational-basis inputs, we again obtain universal quantum computational power.
Scott M. Cohen, 2019
Given a protocol $\mathcal{P}$ that implements a multipartite quantum channel $\mathcal{E}$ by repeated rounds of local operations and classical communication (LOCC), we construct an alternate LOCC protocol for $\mathcal{E}$ in no more rounds than $\mathcal{P}$ and no more than a fixed, constant number of outcomes for each local measurement, the same constant number for every party and every round. We then obtain another upper bound on the number of outcomes that, under certain conditions, improves on the first. The latter bound shows that for LOCC channels that are extreme points of the convex set of all quantum channels, the parties can restrict the number of outcomes in their individual local measurements to no more than the square of their local Hilbert space dimension, $d_\alpha$, suggesting a possible link between the required resources for LOCC and the convex structure of the set of all quantum channels. Our bounds on the number of outcomes, which indicate the need for only constant resources per round independent of the number of rounds $r$ (including when that number is infinite), stand in stark contrast to the exponential $r$-dependence in the only previously published bound of which we are aware. If a lower bound is known on the number of product operators needed to represent the channel, we obtain a lower bound on the number of rounds required to implement the given channel by LOCC. Finally, we show that when the quantum channel is not required, but only that a given task be implemented deterministically, then no more than $d_\alpha^2$ outcomes are needed for each local measurement by party $\alpha$.
Quantum metrology offers a quadratic advantage over classical approaches to parameter estimation problems by utilizing entanglement and nonclassicality. However, the hurdle of actually implementing the necessary quantum probe states and measurements, which vary drastically for different metrological scenarios, is usually not taken into account. We show that for a wide range of tasks in metrology, 2D cluster states (a particular family of states useful for measurement-based quantum computation) can serve as flexible resources that allow one to efficiently prepare any required state for sensing, and perform appropriate (entangled) measurements using only single qubit operations. Crucially, the overhead in the number of qubits is less than quadratic, thus preserving the quantum scaling advantage. This is ensured by using a compression to a logarithmically sized space that contains all relevant information for sensing. We specifically demonstrate how our method can be used to obtain optimal scaling for phase and frequency estimation in local estimation problems, as well as for the Bayesian equivalents with Gaussian priors of varying widths. Furthermore, we show that in the paradigmatic case of local phase estimation 1D cluster states are sufficient for optimal state preparation and measurement.
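For context, the quadratic advantage mentioned here is the standard gap between the shot-noise (standard quantum) limit attainable with $N$ independent probes and the Heisenberg limit attainable with $N$ entangled probes when estimating a phase $\phi$; a brief sketch of the two scalings being compared:

```latex
\Delta\phi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}
\qquad\text{vs.}\qquad
\Delta\phi_{\mathrm{HL}} \sim \frac{1}{N}
```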
Bartosz Regula, 2021
The difficulty in manipulating quantum resources deterministically often necessitates the use of probabilistic protocols, but the characterization of their capabilities and limitations has been lacking. Here, we develop two general approaches to this problem. First, we introduce a new resource monotone based on the Hilbert projective metric and we show that it obeys a very strong type of monotonicity: it can rule out all transformations, probabilistic or deterministic, between states in any quantum resource theory. This allows us to place fundamental limitations on state transformations and restrict the advantages that probabilistic protocols can provide over deterministic ones, significantly strengthening previous findings and extending recent no-go theorems. We apply our results to obtain a substantial improvement in lower bounds for the errors and overheads of probabilistic distillation protocols, directly applicable to tasks such as entanglement or magic state distillation, and computable through convex optimization. In broad classes of resources, we show that no better restrictions on probabilistic protocols are possible -- our monotone can provide a necessary and sufficient condition for probabilistic resource transformations, thus allowing us to quantify exactly the highest fidelity achievable in resource distillation tasks by means of any probabilistic manipulation protocol. Complementing this approach, we introduce a general method for bounding achievable probabilities in resource transformations through a family of convex optimization problems. We show it to tightly characterize single-shot probabilistic distillation in broad types of resource theories, allowing an exact analysis of the trade-offs between the probabilities and errors in distilling maximally resourceful states.