
Quantum Machine Learning with SQUID

Published by: Jakub Filipek
Publication date: 2021
Research field: Physics
Paper language: English





In this work we present the Scaled QUantum IDentifier (SQUID), an open-source framework for exploring hybrid quantum-classical algorithms for classification problems. The classical infrastructure is based on PyTorch, and we provide a standardized design for implementing a variety of quantum models with support for back-propagation for efficient training. We present the structure of our framework and provide examples of using SQUID on a standard binary classification problem drawn from the popular MNIST dataset. In particular, we highlight how the choice of output for variational quantum models affects the scalability of gradient-based optimization.
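As a rough illustration of the kind of model the framework targets, the following sketch (hypothetical code, not SQUID's actual API) wires a small classical PyTorch encoder to a simulated variational quantum layer; the class names, sizes, and the single-qubit RY simulation are illustrative assumptions.

import torch
import torch.nn as nn

def ry(theta: torch.Tensor) -> torch.Tensor:
    # Single-qubit RY rotation matrices, batched over the leading dimensions.
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    return torch.stack([torch.stack([c, -s], -1),
                        torch.stack([s, c], -1)], -2)

class VariationalQuantumLayer(nn.Module):
    # Simulates n independent single-qubit circuits, RY(x_i) followed by a
    # trainable RY(w_i), and returns the Z expectation value of each qubit.
    def __init__(self, n_qubits: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(n_qubits))

    def forward(self, x):                                # x: (batch, n_qubits)
        zero = torch.zeros(*x.shape, 2, device=x.device)
        zero[..., 0] = 1.0                               # |0> for every qubit
        state = ry(x) @ zero.unsqueeze(-1)               # angle-encode the input
        state = ry(self.weights.expand_as(x)) @ state    # trainable rotation
        probs = state.squeeze(-1).pow(2)
        return probs[..., 0] - probs[..., 1]             # <Z> = P(0) - P(1)

class HybridClassifier(nn.Module):
    def __init__(self, n_qubits: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(28 * 28, n_qubits), nn.Tanh())
        self.quantum = VariationalQuantumLayer(n_qubits)
        self.readout = nn.Linear(n_qubits, 1)

    def forward(self, images):
        return self.readout(self.quantum(self.encoder(images)))

# Usage: one training step on a dummy batch of MNIST-sized images.
model = HybridClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
logits = model(torch.rand(8, 1, 28, 28)).squeeze(-1)
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (8,)).float())
loss.backward()
opt.step()

Because the quantum layer above is expressed with differentiable tensor operations, its rotation angles receive gradients through ordinary back-propagation, which is the property the abstract emphasizes for efficient training.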




Read also

Quantum machine learning (QML) can complement the growing trend of using learned models for a myriad of classification tasks, from image recognition to natural speech processing. A quantum advantage arises due to the intractability of quantum operations on a classical computer. Many datasets used in machine learning are crowd-sourced or contain some private information. To the best of our knowledge, no current QML models are equipped with privacy-preserving features, which raises concerns as it is paramount that models do not expose sensitive information. Thus, privacy-preserving algorithms need to be implemented with QML. One solution is to make the machine learning algorithm differentially private, meaning the effect of a single data point on the training dataset is minimized. Differentially private machine learning models have been investigated, but differential privacy has yet to be studied in the context of QML. In this study, we develop a hybrid quantum-classical model that is trained to preserve privacy using a differentially private optimization algorithm. This marks the first proof-of-principle demonstration of privacy-preserving QML. The experiments demonstrate that differentially private QML can protect user-sensitive information without diminishing model accuracy. Although the quantum model is simulated and tested on a classical computer, it demonstrates the potential to be efficiently implemented on near-term quantum devices (noisy intermediate-scale quantum, NISQ). The approach's success is illustrated via the classification of spatially classed two-dimensional datasets and a binary MNIST classification. This implementation of privacy-preserving QML will ensure confidentiality and accurate learning on NISQ technology.
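The core idea of differentially private optimization can be conveyed with a short sketch (an illustrative DP-SGD-style step, not the authors' implementation): each example's gradient is clipped to a fixed norm, the clipped gradients are summed, and Gaussian noise proportional to the clip norm is added before averaging, so no single record can dominate the update. The function name, clipping constant, and noise multiplier below are assumptions for illustration.

import torch

def dp_sgd_step(model, loss_fn, xb, yb, optimizer, clip_norm=1.0, noise_mult=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):                     # process one example at a time
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in params]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        for acc, g in zip(summed, grads):
            acc += g * scale                     # per-example gradient clipping
    model.zero_grad()
    for p, acc in zip(params, summed):
        noise = torch.randn_like(acc) * noise_mult * clip_norm
        p.grad = (acc + noise) / len(xb)         # noisy average gradient
    optimizer.step()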
Fuelled by increasing computer power and algorithmic advances, machine learning techniques have become powerful tools for finding patterns in data. Since quantum systems produce counter-intuitive patterns believed not to be efficiently produced by classical systems, it is reasonable to postulate that quantum computers may outperform classical computers on machine learning tasks. The field of quantum machine learning explores how to devise and implement concrete quantum software that offers such advantages. Recent work has made clear that the hardware and software challenges are still considerable but has also opened paths towards solutions.
Distributed training across several quantum computers could significantly improve the training time, and if we could share the learned model, not the data, it could potentially improve data privacy, as the training would happen where the data is located. However, to the best of our knowledge, no work has yet been done on quantum machine learning (QML) in a federated setting. In this work, we present federated training of hybrid quantum-classical machine learning models, although our framework could be generalized to pure quantum machine learning models. Specifically, we consider a quantum neural network (QNN) coupled with a classical pre-trained convolutional model. Our distributed federated learning scheme achieves almost the same level of trained-model accuracy while providing significantly faster distributed training. This demonstrates a promising future research direction for scaling and privacy.
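A hedged sketch of the federated-averaging step such a scheme relies on (illustrative only, not the authors' code): each site trains its own copy of the hybrid model on local data, and only parameter tensors, never raw data, are combined into the shared model, weighted by local dataset size.

import copy

def federated_average(global_model, client_models, client_sizes):
    # Weighted average of client parameters (FedAvg-style aggregation).
    total = float(sum(client_sizes))
    avg_state = copy.deepcopy(global_model.state_dict())
    for key in avg_state:
        avg_state[key] = sum(m.state_dict()[key] * (n / total)
                             for m, n in zip(client_models, client_sizes))
    global_model.load_state_dict(avg_state)
    return global_model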
Machine learning has emerged as a promising approach to study the properties of many-body systems. Recently proposed as a tool to classify phases of matter, the approach relies on classical simulation methods, such as Monte Carlo, which are known to experience an exponential slowdown when simulating certain quantum systems. To overcome this slowdown while still leveraging machine learning, we propose a variational quantum algorithm which merges quantum simulation and quantum machine learning to classify phases of matter. Our classifier is directly fed labeled states recovered by the variational quantum eigensolver algorithm, thereby avoiding the data-reading slowdown experienced in many applications of quantum-enhanced machine learning. We propose families of variational ansatz states that are inspired directly by tensor networks. This allows us to use tools from tensor network theory to explain properties of the phase diagrams the presented method recovers. Finally, we propose a nearest-neighbour (checkerboard) quantum neural network. This majority-vote quantum classifier is successfully trained to recognize phases of matter with $99\%$ accuracy for the transverse-field Ising model and $94\%$ accuracy for the XXZ model. These findings suggest that our merger of quantum simulation and quantum-enhanced machine learning offers a fertile ground to develop computational insights into quantum systems.
Quantum computers have the opportunity to be transformative for a variety of computational tasks. Recently, there have been proposals to use the unsimulatability of large quantum devices to perform regression, classification, and other machine learning tasks with quantum advantage by using kernel methods. While unsimulatability is a necessary condition for quantum advantage in machine learning, it is not sufficient, as not all kernels are equally effective. Here, we study the use of quantum computers to perform the machine learning tasks of one- and multi-dimensional regression, as well as reinforcement learning, using Gaussian processes. By using approximations of performant classical kernels enhanced with extra quantum resources, we demonstrate that quantum devices, both in simulation and on hardware, can perform machine learning tasks at least as well as, and many times better than, the classical inspiration. Our informed kernel design demonstrates a path towards effectively utilizing quantum devices for machine learning tasks.
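For context, the sketch below shows plain Gaussian-process regression with a pluggable kernel function; in the quantum-enhanced setting described above the Gram-matrix entries would be estimated from state overlaps on a device, whereas here an ordinary RBF kernel stands in. All names and the jitter term are illustrative assumptions.

import numpy as np

def rbf_kernel(a, b, length=1.0):
    # Squared-exponential kernel; a: (n, d), b: (m, d) -> (n, m) Gram matrix.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def gp_predict(X_train, y_train, X_test, kernel=rbf_kernel, jitter=1e-6):
    # Standard GP posterior mean and covariance for regression.
    K = kernel(X_train, X_train) + jitter * np.eye(len(X_train))
    K_star = kernel(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = K_star @ alpha
    cov = kernel(X_test, X_test) - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov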