
Quasiprobability decompositions with reduced sampling overhead

Added by Christophe Piveteau
Publication date: 2021
Fields: Physics
Language: English





Quantum error mitigation techniques can reduce noise on current quantum hardware without the need for fault-tolerant quantum error correction. For instance, the quasiprobability method simulates a noise-free quantum computer using a noisy one, with the caveat of only producing the correct expected values of observables. The cost of this error mitigation technique manifests as a sampling overhead which scales exponentially in the number of corrected gates. In this work, we present two novel approaches to reduce the exponential basis of that overhead. First, we introduce a robust quasiprobability method that allows for a tradeoff between an approximation error and the sampling overhead via semidefinite programming. Second, we derive a new algorithm based on mathematical optimization that aims to choose the quasiprobability decomposition in a noise-aware manner. Both techniques lead to a significantly lower overhead compared to existing approaches.
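The quasiprobability method described above can be made concrete with a small numerical sketch. The example below is an illustration only, not the paper's construction: it assumes a single-qubit depolarizing noise model with strength p = 0.2 (an arbitrary choice), decomposes the inverse noise channel into Pauli operations with one negative coefficient, and estimates a noise-free expectation value by sampling those operations. The quantity gamma = sum_i |q_i| is exactly the per-gate sampling overhead whose exponential accumulation the paper aims to reduce.

```python
import numpy as np

rng = np.random.default_rng(0)

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

p = 0.2           # assumed depolarizing strength (illustrative, not from the paper)
f = 1 - 4 * p / 3  # Bloch-vector shrinking factor of the depolarizing channel

def depolarize(rho):
    """Single-qubit depolarizing channel D with strength p."""
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

# Quasiprobability decomposition of the inverse channel:
#   D^{-1}(rho) = q_I * rho + q_P * (X rho X + Y rho Y + Z rho Z)
# Trace preservation gives q_I + 3 q_P = 1; inverting the Bloch
# shrinkage gives q_I - q_P = 1/f.
q_I = (1 + 3 / f) / 4
q_P = (1 - 1 / f) / 4   # negative: this is what makes it a *quasi*probability
qs = np.array([q_I, q_P, q_P, q_P])
gamma = np.abs(qs).sum()  # sampling overhead of one corrected gate

paulis = [I, X, Y, Z]
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|, ideal <X> = 1
noisy = depolarize(rho)  # the "gate" here is a noisy identity

# Monte Carlo estimator: sample operation i with probability |q_i|/gamma,
# apply it after the noisy gate, and reweight by gamma * sign(q_i).
probs = np.abs(qs) / gamma
shots = 50000
est = 0.0
for _ in range(shots):
    i = rng.choice(4, p=probs)
    out = paulis[i] @ noisy @ paulis[i]
    est += gamma * np.sign(qs[i]) * np.real(np.trace(X @ out))
est /= shots

print(round(np.real(np.trace(X @ noisy)), 3))  # unmitigated: 0.733
print(est)                                     # mitigated: close to the ideal 1.0
```

Note how the cure for the bias (reweighting by gamma) inflates the estimator's variance by roughly gamma^2, which compounds multiplicatively over corrected gates; this is the exponential sampling overhead the two techniques in the paper target.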




Read More

To witness quantum advantages in practical settings, substantial efforts are required not only at the hardware level but also in theoretical research to reduce the computational cost of a given protocol. Quantum computation has the potential to significantly enhance existing classical machine learning methods, and several quantum algorithms for binary classification based on the kernel method have been proposed. These algorithms rely on estimating an expectation value, which in turn requires an expensive quantum data encoding procedure to be repeated many times. In this work, we calculate explicitly the number of repetitions necessary for acquiring a fixed success probability and show that the Hadamard-test and the swap-test circuits achieve the optimal variance in terms of the quantum circuit parameters. The variance, and hence the number of repetitions, can be further reduced only via optimization over data-related parameters. We also show that kernel-based binary classification can be performed with a single-qubit measurement regardless of the number and the dimension of the data. Finally, we show that for a number of relevant noise models the classification can be performed reliably without quantum error correction. Our findings are useful for designing quantum classification experiments under limited resources, which is the common challenge in the noisy intermediate-scale quantum era.
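The flavor of such repetition counts can be conveyed with a generic concentration bound. The sketch below uses the Hoeffding inequality for measurement outcomes bounded in [-1, 1], which is a coarse, model-independent bound rather than the circuit-specific variance analysis carried out in the paper:

```python
import math

def shots_needed(eps, delta):
    """Hoeffding bound: number of repetitions so that the empirical mean of
    outcomes in [-1, 1] lies within eps of the true expectation value with
    probability at least 1 - delta. A generic worst-case count; a
    variance-aware analysis (as in the paper) can do better."""
    return math.ceil(2 * math.log(2 / delta) / eps ** 2)

# e.g. precision 0.05 with 99% confidence
print(shots_needed(0.05, 0.01))  # 4239
```

Because the bound scales as 1/eps^2, halving the target precision quadruples the number of encoding repetitions, which is why reducing the estimator variance directly translates into fewer runs of the expensive data-encoding circuit.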
We present analytic expressions for the $s$-parametrized currents on the sphere for both unitary and dissipative evolutions. We examine the spatial distribution of the flow generated by these currents for quadratic Hamiltonians. The results are applied for the study of the quantum dissipative dynamics of the time-honored Kerr and Lipkin models, exploring the appearance of the semiclassical limit in stable, unstable and tunnelling regimes.
95 - G. J. Milburn 2006
We present a method for computing the action of conditional linear optical transformations, conditioned on photon counting, for arbitrary signal states. The method is based on the Q-function, a quasiprobability distribution for antinormally ordered moments. We treat an arbitrary number of signal and ancilla modes. The ancilla modes are prepared in an arbitrary product number state. We construct the conditional, non-unitary signal transformations for an arbitrary photon number count on each of the ancilla modes.
118 - T. Kiesel , W. Vogel , M. Bellini 2011
We report the experimental reconstruction of a nonclassicality quasiprobability for a single-photon added thermal state. This quantity has significant negativities, whose presence is necessary and sufficient for the nonclassicality of the quantum state. Our method presents several advantages compared to the reconstruction of the P function, since the nonclassicality filters used in this case can regularize the quasiprobabilities as well as their statistical uncertainties. A priori assumptions about the quantum state are therefore not necessary. We also demonstrate that, in principle, our method is not limited by small quantum efficiencies.
82 - J. Sperling , W. Vogel 2019
We study the quasiprobability representation of quantum light, as introduced by Glauber and Sudarshan, for the unified characterization of quantum phenomena. We begin with reviewing the past and current impact of this technique. Regularization and convolution methods are specifically considered since they are accessible in experiments. We further discuss more general quantum systems for which the concept of negative probabilities can be generalized, being highly relevant for quantum information science. For analyzing quantum superpositions, we apply recently developed approaches to visualize quantum coherence of states via negative quasiprobability representations, including regularized quasiprobabilities for light and more general quantum correlated systems.
