
Machine learning of high dimensional data on a noisy quantum processor

Posted by: Gabriel Perdue
Publication date: 2021
Research field: Physics
Paper language: English





We present a quantum kernel method for high-dimensional data analysis using Google's universal quantum processor, Sycamore. This method is successfully applied to the cosmological benchmark of supernova classification using real spectral features, with no dimensionality reduction and without vanishing kernel elements. Instead of using a synthetic dataset of low dimension or pre-processing the data with a classical machine learning algorithm to reduce its dimension, this experiment demonstrates that machine learning with real, high-dimensional data is possible on a quantum processor, but it requires careful attention to shot statistics and mean kernel element size when constructing a circuit ansatz. Our experiment utilizes 17 qubits to classify 67-dimensional data - significantly higher dimensionality than the largest prior quantum kernel experiments - resulting in classification accuracy that is competitive with noiseless simulation and comparable classical techniques.
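The pipeline the abstract describes (encode each datum with a parameterized circuit, estimate pairwise kernel elements from state overlaps, then train a classical support-vector machine on the resulting Gram matrix) can be illustrated with a small classical simulation. The code below is a minimal sketch under stated assumptions: the angle-encoding circuit with per-qubit Ry rotations and a ring of CZ gates is a generic stand-in rather than the paper's Sycamore ansatz, exact statevector overlaps replace shot-based kernel estimation, and a 4-qubit toy dataset replaces the 17-qubit, 67-dimensional supernova data.

```python
import numpy as np
from sklearn.svm import SVC

def ry(theta):
    """Single-qubit Ry rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(x, n_qubits):
    """Statevector U(x)|0...0> for one layer of Ry(x_q) followed by a CZ ring."""
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0
    for q in range(n_qubits):                 # data-dependent rotations
        full = np.eye(1)
        for m in range(n_qubits):
            full = np.kron(full, ry(x[q]) if m == q else np.eye(2))
        state = full @ state
    for q in range(n_qubits):                 # entangling CZ ring
        q2 = (q + 1) % n_qubits
        for idx in range(2 ** n_qubits):      # CZ flips the sign when both bits are 1
            if (idx >> (n_qubits - 1 - q)) & 1 and (idx >> (n_qubits - 1 - q2)) & 1:
                state[idx] = -state[idx]
    return state

def kernel_matrix(X1, X2, n_qubits):
    """Gram matrix of squared overlaps k(x1, x2) = |<phi(x2)|phi(x1)>|^2."""
    S1 = np.array([encode(x, n_qubits) for x in X1])
    S2 = np.array([encode(x, n_qubits) for x in X2])
    return np.abs(S1 @ S2.conj().T) ** 2

# Toy dataset: 40 points, 4 features, labeled by a simple threshold.
rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 4))
y = (X.sum(axis=1) > 2 * np.pi).astype(int)

K = kernel_matrix(X, X, n_qubits=4)
clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))
```

Because the SVM accepts a precomputed kernel, the quantum device only ever has to supply the Gram matrix; the optimization stays entirely classical, which is the division of labor quantum kernel methods generally use.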


Read also

Machine learning has emerged as a promising approach to study the properties of many-body systems. Recently proposed as a tool to classify phases of matter, the approach relies on classical simulation methods, such as Monte Carlo, which are known to experience an exponential slowdown when simulating certain quantum systems. To overcome this slowdown while still leveraging machine learning, we propose a variational quantum algorithm which merges quantum simulation and quantum machine learning to classify phases of matter. Our classifier is directly fed labeled states recovered by the variational quantum eigensolver algorithm, thereby avoiding the data-reading slowdown experienced in many applications of quantum-enhanced machine learning. We propose families of variational ansatz states that are inspired directly by tensor networks. This allows us to use tools from tensor network theory to explain properties of the phase diagrams the presented method recovers. Finally, we propose a nearest-neighbour (checkerboard) quantum neural network. This majority-vote quantum classifier is successfully trained to recognize phases of matter with $99\%$ accuracy for the transverse field Ising model and $94\%$ accuracy for the XXZ model. These findings suggest that our merger of quantum simulation and quantum-enhanced machine learning offers fertile ground for developing computational insights into quantum systems.
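As a classical, miniaturized stand-in for this pipeline, the sketch below labels phases of the transverse-field Ising model from exactly computed ground states (playing the role of the VQE-prepared states) and replaces the checkerboard quantum neural network with a logistic regression over computational-basis measurement probabilities. The chain length, field grid, and classifier choice are illustrative assumptions; only the Hamiltonian and its critical point at h = 1 come from the standard TFIM setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X_ = np.array([[0., 1.], [1., 0.]])   # Pauli X
Z_ = np.array([[1., 0.], [0., -1.]])  # Pauli Z

def op_on(op, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit chain."""
    out = np.eye(1)
    for q in range(n):
        out = np.kron(out, op if q == site else np.eye(2))
    return out

def tfim_ground_state(h, n):
    """Exact ground state of H = -sum_i Z_i Z_{i+1} - h sum_i X_i (open chain)."""
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        H -= op_on(Z_, i, n) @ op_on(Z_, i + 1, n)
    for i in range(n):
        H -= h * op_on(X_, i, n)
    return np.linalg.eigh(H)[1][:, 0]

n = 6
hs = np.linspace(0.2, 1.8, 33)
# Features: computational-basis measurement probabilities of each ground state.
feats = np.array([np.abs(tfim_ground_state(h, n)) ** 2 for h in hs])
labels = (hs > 1.0).astype(int)       # 0 = ferromagnetic, 1 = paramagnetic
clf = LogisticRegression(max_iter=1000).fit(feats, labels)
print("train accuracy:", clf.score(feats, labels))
```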
The successful implementation of algorithms on quantum processors relies on the accurate control of quantum bits (qubits) to perform logic gate operations. In this era of noisy intermediate-scale quantum (NISQ) computing, systematic miscalibrations, drift, and crosstalk in the control of qubits can lead to a coherent form of error which has no classical analog. Coherent errors severely limit the performance of quantum algorithms in an unpredictable manner, and mitigating their impact is necessary for realizing reliable quantum computations. Moreover, the average error rates measured by randomized benchmarking and related protocols are not sensitive to the full impact of coherent errors, and therefore do not reliably predict the global performance of quantum algorithms, leaving us unprepared to validate the accuracy of future large-scale quantum computations. Randomized compiling is a protocol designed to overcome these performance limitations by converting coherent errors into stochastic noise, dramatically reducing unpredictable errors in quantum algorithms and enabling accurate predictions of algorithmic performance from error rates measured via cycle benchmarking. In this work, we demonstrate significant performance gains under randomized compiling for the four-qubit quantum Fourier transform algorithm and for random circuits of variable depth on a superconducting quantum processor. Additionally, we accurately predict algorithm performance using experimentally measured error rates. Our results demonstrate that randomized compiling can be utilized to leverage and predict the capabilities of modern-day noisy quantum processors, paving the way forward for scalable quantum computing.
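The core mechanism (converting coherent errors into stochastic noise by dressing each cycle with random Pauli gates) can be seen in a toy single-qubit model. The sketch below is an illustration with assumed numbers, not the paper's hardware protocol: the logical circuit is the identity, the coherent error is a small over-rotation about X on every cycle, and twirling each cycle randomizes the sign of that rotation, so error probabilities add linearly instead of amplitudes adding coherently.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Y, Z]

eps = 0.02                                        # coherent over-rotation per cycle
U_err = np.cos(eps) * I2 - 1j * np.sin(eps) * X   # exp(-i * eps * X)
n_cycles = 200
rng = np.random.default_rng(1)

def error_prob(state):
    """Probability of measuring |1> when the ideal output is |0>."""
    return abs(state[1]) ** 2

# Bare circuit: coherent amplitudes add, error probability ~ sin^2(n * eps).
psi = np.array([1, 0], dtype=complex)
for _ in range(n_cycles):
    psi = U_err @ psi
print("bare    :", error_prob(psi), "(analytic:", np.sin(n_cycles * eps) ** 2, ")")

# Twirled circuit: each cycle becomes P @ U_err @ P with a random Pauli P
# (Paulis are self-inverse, so the ideal identity circuit is unchanged).
# Conjugation randomizes the sign of the X over-rotation, so over many
# randomizations errors add like a random walk: probability ~ n * sin^2(eps).
probs = []
for _ in range(500):
    psi = np.array([1, 0], dtype=complex)
    for _ in range(n_cycles):
        P = PAULIS[rng.integers(4)]
        psi = P @ U_err @ P @ psi
    probs.append(error_prob(psi))
print("twirled :", np.mean(probs), "(analytic:", n_cycles * np.sin(eps) ** 2, ")")
```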
The computation of molecular excitation energies is essential for predicting photo-induced reactions of chemical and technological interest. While the classical computing resources needed for this task scale poorly, quantum algorithms emerge as promising alternatives. In particular, the extension of the variational quantum eigensolver algorithm to the computation of excitation energies is an attractive option. However, there is currently a lack of such algorithms for correlated molecular systems that are amenable to near-term, noisy hardware. In this work, we propose an extension of the well-established classical equation-of-motion approach to a quantum algorithm for the calculation of molecular excitation energies on noisy quantum computers. In particular, we demonstrate the efficiency of this approach in the calculation of the excitation energies of the LiH molecule on an IBM Quantum computer.
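In its simplest subspace form, the equation-of-motion idea reduces to a generalized eigenvalue problem. The sketch below is a simplified, quantum-subspace-expansion-style illustration rather than the qEOM algorithm of this work: it uses a toy two-qubit Hamiltonian with made-up coefficients (not LiH), an exactly computed ground state standing in for the VQE state, and exact matrix elements where hardware would provide measured estimates.

```python
import numpy as np
from scipy.linalg import eigh

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

# Toy two-qubit Hamiltonian with illustrative coefficients (not LiH).
H = (-1.0 * np.kron(Z, I2) - 0.8 * np.kron(I2, Z)
     + 0.4 * np.kron(X, X) + 0.2 * np.kron(Z, Z))

evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 0]                      # exact ground state as a VQE stand-in

# Operator pool: identity, a single excitation on each qubit, and a double
# excitation, with sigma^+ = |1><0| as the elementary raising operator.
sp = np.array([[0., 0.], [1., 0.]])
ops = [np.eye(4), np.kron(sp, I2), np.kron(I2, sp), np.kron(sp, sp)]

# Project H onto span{O_k |psi0>} and solve the generalized eigenproblem
# Hs c = E S c; on hardware these matrix elements would be measured.
basis = np.array([O @ psi0 for O in ops]).T
Hs = basis.conj().T @ H @ basis
S = basis.conj().T @ basis
w = eigh(Hs, S, eigvals_only=True)

# For this 4-dimensional toy the pool is complete, so the subspace energies
# reproduce the exact spectrum; truncated pools give approximations.
print("subspace energies:", np.round(w, 6))
print("exact energies   :", np.round(evals, 6))
print("excitation gaps  :", np.round(w[1:] - w[0], 6))
```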
Topological data analysis offers a robust way to extract useful information from noisy, unstructured data by identifying its underlying structure. Recently, an efficient quantum algorithm was proposed [Lloyd, Garnerone, Zanardi, Nat. Commun. 7, 10138 (2016)] for calculating Betti numbers of data points, topological features that count the number of holes of various dimensions in a scatterplot. Here, we implement a proof-of-principle demonstration of this quantum algorithm by employing a six-photon quantum processor to successfully analyze the topological features, i.e. the Betti numbers, of a network of three data points, providing new insights into data analysis in the era of quantum computing.
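As a classical reference for what the quantum algorithm estimates, the short sketch below computes Betti numbers of a Vietoris-Rips complex built on a handful of points at a chosen distance scale. The three example points and scales are illustrative; for a Rips complex on three points, any loop is automatically filled by a triangle, so beta_1 stays zero and the interesting quantity is beta_0, the number of connected components.

```python
import numpy as np
from itertools import combinations

def rips_betti_01(points, eps):
    """beta_0, beta_1 of the Vietoris-Rips complex at scale eps (assumes no
    enclosed voids, i.e. beta_2 = 0, which holds for three points)."""
    n = len(points)
    edges = [e for e in combinations(range(n), 2)
             if np.linalg.norm(points[e[0]] - points[e[1]]) <= eps]
    # A triangle enters the Rips complex as soon as all three edges are present.
    tris = [t for t in combinations(range(n), 3)
            if all(e in edges for e in combinations(t, 2))]
    # beta_0: connected components via union-find over the edges.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in edges:
        parent[find(i)] = find(j)
    b0 = len({find(i) for i in range(n)})
    # Euler characteristic V - E + F = beta_0 - beta_1 when beta_2 = 0.
    b1 = b0 - (n - len(edges) + len(tris))
    return b0, b1

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
for eps in (0.5, 1.01, 1.2):
    print(f"eps = {eps}: (beta_0, beta_1) =", rips_betti_01(pts, eps))
```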
Quantum computation, a completely different paradigm of computing, benefits from theoretically proven speed-ups for certain problems and opens up the possibility of exactly studying the properties of quantum systems. Yet, because of the inherently fragile nature of the physical computing elements, qubits, achieving quantum advantages over classical computation requires extremely low error rates for qubit operations as well as a significant overhead of physical qubits in order to realize fault-tolerance via quantum error correction. However, recent theoretical work has shown that the accuracy of computations based on expectation values of quantum observables can be enhanced through an extrapolation of results from a collection of experiments of varying noise. Here, we demonstrate this error mitigation protocol on a superconducting quantum processor, enhancing its computational capability with no additional hardware modifications. We apply the protocol to mitigate errors in canonical single- and two-qubit experiments and then extend its application to the variational optimization of Hamiltonians for quantum chemistry and magnetism. We effectively demonstrate that the suppression of incoherent errors helps to unearth otherwise inaccessible accuracies in the variational solutions using our noisy processor. These results demonstrate that error mitigation techniques will be critical to significantly enhance the capabilities of near-term quantum computing hardware.
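The extrapolation protocol described here condenses to a few lines: measure the same observable at several deliberately amplified noise levels, then Richardson-extrapolate back to the zero-noise limit. The sketch below is a synthetic illustration with assumed numbers (an exponential damping model for the noisy expectation value and simulated shot noise), not data from a real processor, where noise amplification would come from stretched pulses or gate folding.

```python
import numpy as np

IDEAL = 1.0     # assumed noiseless expectation value <Z>
DECAY = 0.15    # assumed effective error rate per unit noise strength
rng = np.random.default_rng(7)

def noisy_expectation(c, shots=100_000):
    """Simulate measuring <Z> with the noise stretched by factor c."""
    mean = IDEAL * np.exp(-DECAY * c)        # depolarizing-style damping
    p_minus = (1 - mean) / 2                 # probability of a -1 outcome
    samples = rng.choice([1, -1], size=shots, p=[1 - p_minus, p_minus])
    return samples.mean()

# Measure at stretch factors c = 1, 2, 3, fit a quadratic, and evaluate at
# c = 0: Richardson extrapolation to the zero-noise limit.
cs = np.array([1.0, 2.0, 3.0])
vals = np.array([noisy_expectation(c) for c in cs])
mitigated = np.polyval(np.polyfit(cs, vals, deg=2), 0.0)

print("noisy values (c=1,2,3):", np.round(vals, 4))
print("zero-noise extrapolate:", round(float(mitigated), 4))
print("ideal value           :", IDEAL)
```

The unmitigated c = 1 value sits well below the ideal, while the extrapolated estimate recovers it to within shot noise, which is the qualitative gain the protocol provides.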
