
Robust quantum classifier with minimal overhead

Published by: Daniel Kyungdeock Park
Publication date: 2021
Research field: Physics
Paper language: English





To witness quantum advantages in practical settings, substantial effort is required not only at the hardware level but also in theoretical research to reduce the computational cost of a given protocol. Quantum computation has the potential to significantly enhance existing classical machine learning methods, and several quantum algorithms for binary classification based on the kernel method have been proposed. These algorithms rely on estimating an expectation value, which in turn requires an expensive quantum data encoding procedure to be repeated many times. In this work, we calculate explicitly the number of repetitions necessary to achieve a fixed success probability and show that the Hadamard-test and swap-test circuits achieve the optimal variance in terms of the quantum circuit parameters. The variance, and hence the number of repetitions, can be reduced further only by optimizing over data-related parameters. We also show that kernel-based binary classification can be performed with a single-qubit measurement regardless of the number and dimension of the data. Finally, we show that for a number of relevant noise models the classification can be performed reliably without quantum error correction. Our findings are useful for designing quantum classification experiments under limited resources, which is the common challenge of the noisy intermediate-scale quantum era.
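As a rough illustration of the repetition-count question addressed above, the following Python/NumPy sketch simulates a swap-test fidelity estimate and uses a Hoeffding-type bound to count the repetitions needed for a fixed precision and success probability. The function names and the Hoeffding bound are illustrative choices, not the paper's tight variance-based analysis.

    import numpy as np

    def shots_for_precision(epsilon, delta):
        # Hoeffding-type bound: repetitions so that the estimated outcome
        # probability lies within epsilon of its mean with probability at
        # least 1 - delta (illustrative, not the paper's tight count).
        return int(np.ceil(np.log(2.0 / delta) / (2.0 * epsilon ** 2)))

    def swap_test_fidelity(psi, phi, shots, seed=0):
        # Classical simulation of the swap test: the ancilla reads 0 with
        # probability (1 + |<psi|phi>|^2) / 2; the fidelity estimate is
        # recovered from the observed frequency of that outcome.
        rng = np.random.default_rng(seed)
        p0 = 0.5 * (1.0 + np.abs(np.vdot(psi, phi)) ** 2)
        counts0 = rng.binomial(shots, p0)
        return 2.0 * counts0 / shots - 1.0

    # Example: overlap of |0> and |+> estimated to ~0.01 with 95% confidence.
    psi = np.array([1.0, 0.0])
    phi = np.array([1.0, 1.0]) / np.sqrt(2)
    n = shots_for_precision(epsilon=0.01, delta=0.05)
    print(n, swap_test_fidelity(psi, phi, n))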



Read also

Kernel methods have a wide spectrum of applications in machine learning. Recently, a link between quantum computing and kernel theory has been formally established, opening up opportunities for quantum techniques to enhance various existing machine learning methods. We present a distance-based quantum classifier whose kernel is based on the quantum state fidelity between training and test data. The quantum kernel can be tailored systematically with a quantum circuit to raise the kernel to an arbitrary power and to assign arbitrary weights to each training datum. Given a specific input state, our protocol calculates the weighted power sum of fidelities of quantum data in quantum parallel via a swap-test circuit followed by two single-qubit measurements, requiring only a constant number of repetitions regardless of the number of data. We also show that our classifier is equivalent to measuring the expectation value of a Helstrom operator, from which the well-known optimal quantum state discrimination can be derived. We demonstrate the proof of principle via classical simulations with a realistic noise model and experiments using the IBM quantum computer.
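The decision rule described above can be emulated classically in a few lines; the sketch below (Python/NumPy; the function name, the explicit weights, and the power parameter are illustrative assumptions) returns the sign of the weighted power sum of fidelities between the test state and the labelled training states, the quantity the protocol evaluates in quantum parallel.

    import numpy as np

    def fidelity_kernel_classify(train_states, labels, weights, test_state, power=1):
        # Weighted power sum of state fidelities |<x_m|x~>|^2, computed
        # directly here instead of via the swap-test circuit.
        fid = np.array([np.abs(np.vdot(x, test_state)) ** 2 for x in train_states])
        score = np.sum(np.asarray(weights) * np.asarray(labels) * fid ** power)
        return 1 if score >= 0 else -1

    # Toy usage: two single-qubit training states with labels +1 and -1.
    train = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    test = np.array([0.8, 0.6])
    print(fidelity_kernel_classify(train, labels=[1, -1], weights=[0.5, 0.5], test_state=test))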
When calculating the overhead of a quantum algorithm made fault-tolerant using the surface code, many previous works have used defects and braids for logical qubit storage and state distillation. In this work, we show that lattice surgery reduces the storage overhead by over a factor of 4, and the distillation overhead by nearly a factor of 5, making it possible to run algorithms with $10^8$ T gates using only $3.7\times 10^5$ physical qubits capable of executing gates with error $p \sim 10^{-3}$. These numbers strongly suggest that defects and braids in the surface code should be deprecated in favor of lattice surgery.
Quantum error mitigation techniques can reduce noise on current quantum hardware without the need for fault-tolerant quantum error correction. For instance, the quasiprobability method simulates a noise-free quantum computer using a noisy one, with the caveat of only producing the correct expected values of observables. The cost of this error mitigation technique manifests as a sampling overhead which scales exponentially in the number of corrected gates. In this work, we present two novel approaches to reduce the exponential basis of that overhead. First, we introduce a robust quasiprobability method that allows for a tradeoff between an approximation error and the sampling overhead via semidefinite programming. Second, we derive a new algorithm based on mathematical optimization that aims to choose the quasiprobability decomposition in a noise-aware manner. Both techniques lead to a significantly lower overhead compared to existing approaches.
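To make the exponential sampling overhead concrete, a short sketch (Python/NumPy; the numbers are hypothetical): in the standard quasiprobability method each corrected gate contributes the one-norm of its quasiprobability decomposition, and the number of samples needed for a fixed precision grows as the square of the product of these one-norms.

    import numpy as np

    def sampling_overhead(gate_one_norms):
        # Standard quasiprobability mitigation: the total one-norm is the
        # product over corrected gates, and the sampling overhead (extra
        # shots for a fixed precision) scales as its square - exponential
        # in the number of corrected gates.
        return float(np.prod(np.asarray(gate_one_norms))) ** 2

    # Hypothetical example: 50 corrected gates, each with one-norm 1.1,
    # already cost roughly 1.4 * 10^4 times more samples.
    print(sampling_overhead([1.1] * 50))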
Fault-tolerant quantum error correction is essential for implementing quantum algorithms of significant practical importance. In this work, we propose a highly effective use of the surface-GKP code, i.e., the surface code consisting of bosonic GKP qubits instead of bare two-dimensional qubits. In our proposal, we use error-corrected two-qubit gates between GKP qubits and introduce a maximum likelihood decoding strategy for correcting shift errors in the two-GKP-qubit gates. Our proposed decoding reduces the total CNOT failure rate of the GKP qubits, e.g., from $0.87\%$ to $0.36\%$ at a GKP squeezing of $12$ dB, compared to the case where simple closest-integer decoding is used. Then, by concatenating the GKP code with the surface code, we find that the threshold GKP squeezing is $9.9$ dB under the assumption that finite squeezing of the GKP states is the dominant noise source. More importantly, we show that a low logical failure rate $p_{L} < 10^{-7}$ can be achieved with moderate hardware requirements, e.g., $291$ modes and $97$ qubits at a GKP squeezing of $12$ dB, as opposed to $1457$ bare qubits for the standard rotated surface code at an equivalent noise level (i.e., $p = 0.36\%$). Such a low failure rate of our surface-GKP code is possible through the use of space-time correlated edges in the matching graphs of the surface code decoder. Further, all edge weights in the matching graphs are computed dynamically based on analog information from the GKP error correction using the full history of all syndrome measurement rounds. We also show that a highly squeezed GKP state with GKP squeezing $\gtrsim 12$ dB can be experimentally realized by using a dissipative stabilization method, namely the Big-small-Big method, with fairly conservative experimental parameters. Lastly, we introduce a three-level ancilla scheme to mitigate ancilla decay errors during GKP state preparation.
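For reference, the baseline closest-integer decoding that the proposal above improves upon can be written down directly (Python/NumPy sketch; the function name is ours): a measured GKP quadrature displacement is attributed to the nearest lattice point spaced by sqrt(pi), and the correction fails whenever the true shift exceeds half that spacing.

    import numpy as np

    def closest_integer_decode(true_shift):
        # Decoder's estimate of the shift error: the smallest-magnitude
        # representative of the shift modulo sqrt(pi), the GKP lattice spacing.
        spacing = np.sqrt(np.pi)
        return true_shift - spacing * np.round(true_shift / spacing)

    # For |shift| < sqrt(pi)/2 ~ 0.886 the estimate equals the true shift and
    # the error is corrected; a larger shift is attributed to the wrong
    # lattice point and leaves a logical error after correction.
    print(closest_integer_decode(0.3), closest_integer_decode(1.0))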
Buhrman, Cleve and Wigderson (STOC '98) observed that for every Boolean function $f : \{-1, 1\}^n \to \{-1, 1\}$ and $\bullet : \{-1, 1\}^2 \to \{-1, 1\}$, the two-party bounded-error quantum communication complexity of $(f \circ \bullet)$ is $O(Q(f) \log n)$, where $Q(f)$ is the bounded-error quantum query complexity of $f$. Note that the bounded-error randomized communication complexity of $(f \circ \bullet)$ is bounded by $O(R(f))$, where $R(f)$ denotes the bounded-error randomized query complexity of $f$. Thus, the BCW simulation has an extra $O(\log n)$ factor appearing that is absent in the classical simulation. A natural question is whether this factor can be avoided. Høyer and de Wolf (STACS '02) showed that for the Set-Disjointness function, this can be reduced to $c^{\log^* n}$ for some constant $c$, and subsequently Aaronson and Ambainis (FOCS '03) showed that this factor can be made a constant. That is, the quantum communication complexity of the Set-Disjointness function (which is $\mathsf{NOR}_n \circ \wedge$) is $O(Q(\mathsf{NOR}_n))$. Perhaps somewhat surprisingly, we show that when $\bullet = \oplus$, the extra $\log n$ factor in the BCW simulation is unavoidable. In other words, we exhibit a total function $F : \{-1, 1\}^n \to \{-1, 1\}$ such that $Q^{cc}(F \circ \oplus) = \Theta(Q(F) \log n)$. To the best of our knowledge, it was not even known prior to this work whether there existed a total function $F$ and a 2-bit function $\bullet$ such that $Q^{cc}(F \circ \bullet) = \omega(Q(F))$.
