
When Machine Learning Meets Quantum Computers: A Case Study

Published by: Weiwen Jiang
Publication date: 2020
Paper language: English





Along with the democratization of AI, machine learning approaches, in particular neural networks, have been applied to a wide range of applications. In different application scenarios, neural networks are accelerated on tailored computing platforms. The acceleration of neural networks on classical computing platforms, such as CPUs, GPUs, FPGAs, and ASICs, has been widely studied; however, as application scale continues to grow, the memory bottleneck becomes pronounced, a problem widely known as the memory wall. In response to this challenge, quantum computing, which can represent 2^N states with N quantum bits (qubits), is regarded as a promising solution. It is therefore pressing to understand how to design quantum circuits for accelerating neural networks. Most recently, initial works have studied how to map neural networks to actual quantum processors. To better understand state-of-the-art designs and inspire new design methodologies, this paper carries out a case study to demonstrate an end-to-end implementation. On the neural network side, we employ a multilayer perceptron to perform image classification on the standard and widely used MNIST dataset. On the quantum computing side, we target IBM Quantum processors, which can be programmed and simulated using IBM Qiskit. This work targets accelerating the inference phase of a trained neural network on the quantum processor. Along with the case study, we demonstrate the typical procedure for mapping neural networks to quantum circuits.
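As a concrete illustration of that mapping procedure, below is a minimal sketch in Qiskit of its usual first step: amplitude-encoding an image into qubit amplitudes (the source of the 2^N-states-from-N-qubits advantage) and applying a layer of weight-derived rotations. This is a toy under stated assumptions, not the paper's implementation; the 4-pixel image and the rotation angles are placeholders, not trained weights.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Amplitude-encode a toy 4-"pixel" image onto 2 qubits: N qubits hold
# 2^N amplitudes, which is the memory advantage the abstract appeals to.
image = np.array([0.1, 0.7, 0.7, 0.1])
state = Statevector(image / np.linalg.norm(image))  # states must be unit-norm

# A trained layer would then appear as weight-derived rotations plus an
# entangling gate; the angles here are placeholders, not trained weights.
layer = QuantumCircuit(2)
layer.ry(0.42, 0)
layer.ry(1.13, 1)
layer.cx(0, 1)

out = state.evolve(layer)      # simulate the circuit on the encoded input
print(out.probabilities())     # measurement statistics ~ class scores
```

On real hardware the final `probabilities()` call would be replaced by repeated measurement shots, which is what the inference procedure on an IBM Quantum processor amounts to.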




Read also

Artificial intelligence (AI) powered wireless networks promise to revolutionize the conventional operation and structure of current networks, from network design to infrastructure management, cost reduction, and user performance improvement. Empowering future networks with AI functionalities will enable a shift from reactive/incident-driven operations to proactive/data-driven operations. This paper provides an overview of the integration of AI functionalities in 5G and beyond networks. Key factors for successful AI integration, such as data, security, and explainable AI, are highlighted. We also summarize the various types of network intelligence as well as machine learning based air interfaces in future networks. Use case examples for the application of AI to the wireless domain are then summarized. We highlight applications to the physical layer, mobility management, wireless security, and localization.
Wireless systems are vulnerable to various attacks such as jamming and eavesdropping due to the shared and broadcast nature of the wireless medium. To support both attack and defense strategies, machine learning (ML) provides automated means to learn from and adapt to wireless communication characteristics that are hard to capture by hand-crafted features and models. This article discusses the motivation, background, and scope of research efforts that bridge ML and wireless security. Motivated by the research directions surveyed in the context of ML for wireless security, ML-based attack and defense solutions and emerging adversarial ML techniques in the wireless domain are identified, along with a roadmap to foster research efforts in bridging ML and wireless security.
Even the most sophisticated artificial neural networks are built by aggregating substantially identical units called neurons. A neuron receives multiple signals, internally combines them, and applies a non-linear function to the resulting weighted sum. Several attempts to generalize neurons to the quantum regime have been proposed, but all collided with the difficulty of implementing non-linear activation functions, which are essential for classical neurons, because of the linear nature of quantum mechanics. Here we propose a solution to this roadblock in the form of a small quantum circuit that naturally simulates neurons with threshold activation. Our quantum circuit defines a building block, the quantum neuron, that can reproduce a variety of classical neural network constructions while maintaining the ability to process superpositions of inputs and preserve quantum coherence and entanglement. In the construction of feedforward networks of quantum neurons, we provide numerical evidence that the network not only can learn a function when trained with a superposition of inputs and the corresponding outputs, but that this training suffices to learn the function on all individual inputs separately. When arranged to mimic Hopfield networks, quantum neural networks exhibit properties of associative memory. Patterns are encoded using the simple Hebbian rule for the weights, and we demonstrate attractor dynamics from corrupted inputs. Finally, the fact that our quantum model closely captures (traditional) neural network dynamics implies that the vast body of literature and results on neural networks becomes directly relevant in the context of quantum machine learning.
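The core trick, measurement supplying the nonlinearity that unitary gates alone cannot, can be sketched in a few lines. The toy below (assuming Qiskit; it is not the paper's repeat-until-success construction, only an illustration of the principle) computes a classical weighted sum, encodes it as a rotation angle, and reads out P(|1⟩) = sin²(θ) as a sigmoid-like activation.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def neuron_activation(weights, inputs, bias=0.0):
    """Probability of measuring |1> acts as a sigmoid-like activation.

    Illustrative only: the quantum-neuron paper uses a more elaborate
    repeat-until-success circuit to sharpen this into a threshold.
    """
    theta = float(np.dot(weights, inputs) + bias)  # classical weighted sum
    qc = QuantumCircuit(1)
    qc.ry(2 * theta, 0)            # Ry(2θ)|0> = cos(θ)|0> + sin(θ)|1>
    return Statevector(qc).probabilities()[1]      # P(1) = sin²(θ) in [0, 1]

print(neuron_activation([0.5, -0.3], [1.0, 0.8], bias=0.1))
```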
With quantum computing technologies nearing the era of commercialization and quantum supremacy, machine learning (ML) appears to be one of the most promising killer applications. Despite significant effort, there has been a disconnect between most quantum ML proposals, the needs of ML practitioners, and the capabilities of near-term quantum devices to demonstrate quantum enhancement in the near future. In this contribution to the focus collection "What would you do with 1000 qubits?", we provide concrete examples of intractable ML tasks that could be enhanced with near-term devices. We argue that to reach this target, the focus should be on areas where ML researchers are struggling, such as generative models in unsupervised and semi-supervised learning, instead of the popular and more tractable supervised learning techniques. We also highlight the case of classical datasets with potential quantum-like statistical correlations where quantum models could be more suitable. We focus on hybrid quantum-classical approaches and illustrate some of the key challenges we foresee for near-term implementations. Finally, we introduce the quantum-assisted Helmholtz machine (QAHM), an attempt to use near-term quantum devices to tackle high-dimensional datasets of continuous variables. Instead of using quantum computers to assist deep learning, as previous approaches do, the QAHM uses deep learning to extract a low-dimensional binary representation of data, suitable for relatively small quantum processors which can assist the training of an unsupervised generative model. Although we illustrate this concept on a quantum annealer, other quantum platforms could benefit as well from this hybrid quantum-classical framework.
C. M. Wilson, 2018
Noisy intermediate-scale quantum computing devices are an exciting platform for the exploration of the power of near-term quantum applications. Performing nontrivial tasks on such devices requires a fundamentally different approach than what would be used on an error-corrected quantum computer. One such approach is to use hybrid algorithms, where problems are reduced to a parameterized quantum circuit that is often optimized in a classical feedback loop. Here we describe one such hybrid algorithm for machine learning tasks by building upon the classical algorithm known as random kitchen sinks. Our technique, called quantum kitchen sinks, uses quantum circuits to nonlinearly transform classical inputs into features that can then be used in a number of machine learning algorithms. We demonstrate the power and flexibility of this proposal by using it to solve binary classification problems for synthetic datasets as well as handwritten digits from the MNIST database. Using the Rigetti quantum virtual machine, we show that small quantum circuits provide significant performance lift over standard linear classical algorithms, reducing classification error rates from 50% to below 0.1%, and from 4.1% to 1.4% in these two examples, respectively. Further, we are able to run the MNIST classification problem, using full-sized MNIST images, on a Rigetti quantum processing unit, finding a modest performance lift over the linear baseline.
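The feature-map idea is straightforward to sketch. Below is an illustrative toy (assuming Qiskit; not the authors' code, which samples measurement bits on hardware rather than computing exact probabilities): random linear maps turn an input into gate angles for small random circuits, and the resulting measurement statistics become nonlinear features for a downstream linear classifier.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

rng = np.random.default_rng(0)

def qks_features(x, n_episodes=20, n_qubits=2):
    """Map input x to nonlinear features via random small circuits."""
    feats = []
    for _ in range(n_episodes):
        omega = rng.normal(size=(n_qubits, len(x)))      # random projection
        beta = rng.uniform(0, 2 * np.pi, size=n_qubits)  # random offsets
        theta = omega @ x + beta                         # this episode's angles
        qc = QuantumCircuit(n_qubits)
        for q in range(n_qubits):
            qc.rx(theta[q], q)
        qc.cx(0, 1)                                      # entangle the qubits
        # use P(qubit 0 = 1) as the episode's feature (measured bits in the paper)
        feats.append(Statevector(qc).probabilities([0])[1])
    return np.array(feats)

print(qks_features(np.array([0.2, 0.9])))
```

These features would then be fed to an ordinary linear classifier (for example, logistic regression), which is where the actual classification happens; the quantum circuits serve only as the nonlinear feature map.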
