
Quantum Perceptron Models

Posted by Nathan Wiebe
Publication date: 2016
Paper language: English





We demonstrate how quantum computation can provide non-trivial improvements in the computational and statistical complexity of the perceptron model. We develop two quantum algorithms for perceptron learning. The first algorithm exploits quantum information processing to determine a separating hyperplane using a number of steps sublinear in the number of data points $N$, namely $O(\sqrt{N})$. The second algorithm illustrates how the classical mistake bound of $O(\frac{1}{\gamma^2})$ can be further improved to $O(\frac{1}{\sqrt{\gamma}})$ through quantum means, where $\gamma$ denotes the margin. Such improvements are achieved through the application of quantum amplitude amplification to the version space interpretation of the perceptron model.
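
To make the first speed-up concrete: the step being accelerated is the scan for a misclassified training point inside the standard perceptron update loop. The following is a minimal classical sketch of that loop in plain NumPy (illustrative only, not the paper's quantum circuit); the linear scan in find_misclassified is the $O(N)$ subroutine that Grover-style amplitude amplification replaces at a cost of $O(\sqrt{N})$ oracle queries per update.

    import numpy as np

    def find_misclassified(w, X, y):
        # Linear scan for a point violating y_i * <w, x_i> > 0.
        # This O(N) search is what the paper's first algorithm performs
        # with Grover search instead, using O(sqrt(N)) queries.
        for i in range(len(X)):
            if y[i] * np.dot(w, X[i]) <= 0:
                return i
        return None

    def perceptron_train(X, y, max_updates=1000):
        # Standard online perceptron: with margin gamma, it converges
        # within the classical O(1/gamma^2) mistake bound.
        w = np.zeros(X.shape[1])
        for _ in range(max_updates):
            i = find_misclassified(w, X, y)
            if i is None:        # no violations: separating hyperplane found
                return w
            w += y[i] * X[i]     # perceptron update rule
        return w

The second algorithm instead searches over the version space of candidate hyperplanes directly, which is where the improved $O(\frac{1}{\sqrt{\gamma}})$ mistake bound comes from.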




Read also

Quantum machine learning algorithms could provide significant speed-ups over their classical counterparts; however, whether they can also achieve good generalization remains unclear. Recently, two quantum perceptron models that give a quadratic improvement over the classical perceptron algorithm using Grover's search were proposed by Wiebe et al. (arXiv:1602.04799). While the first model reduces the complexity with respect to the size of the training set, the second improves the bound on the number of mistakes made by the perceptron. In this paper, we introduce a hybrid quantum-classical perceptron algorithm with lower complexity and better generalization ability than the classical perceptron. We show a quadratic improvement over the classical perceptron in both the number of samples and the margin of the data. We derive a bound on the expected error of the hypothesis returned by our algorithm, which compares favorably to the one obtained with the classical online perceptron. We use numerical experiments to illustrate the trade-off between computational complexity and statistical accuracy in quantum perceptron learning, and we discuss some of the key practical issues surrounding the implementation of quantum perceptron models on near-term quantum devices, where inherent noise makes implementation a serious challenge, although the potential benefits make overcoming it worthwhile.
Perceptrons, which perform binary classification, are the fundamental building blocks of neural networks. Given a data set of size $N$ and margin $\gamma$ (how well the given data are separated), the query complexity of the best-known quantum training algorithm scales as either $(\nicefrac{\sqrt{N}}{\gamma^2})\log(\nicefrac{1}{\gamma^2})$ or $\nicefrac{N}{\sqrt{\gamma}}$, which is achieved by a hybrid of classical and quantum search. In this paper, we improve the version space quantum training method for perceptrons such that the query complexity of our algorithm scales as $\sqrt{\nicefrac{N}{\gamma}}$. This is achieved by constructing an oracle for the perceptrons using quantum counting of the number of data elements that are correctly classified. We show that the query complexity to construct such an oracle has a quadratic improvement over classical methods. Once such an oracle is constructed, bounded-error quantum search can be used to search over the hyperplane instances. The optimality of our algorithm is proven by reducing the evaluation of a two-level AND-OR tree (for which the query complexity lower bound is known) to a multi-criterion search. Our quantum training algorithm can be generalized to train more complex machine learning models such as neural networks, which are built on a large number of perceptrons.
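A rough classical picture of this construction: the oracle marks a candidate hyperplane exactly when it classifies all $N$ points correctly, and the training algorithm then searches over sampled hyperplanes. The sketch below (hypothetical helper names, plain NumPy) shows both pieces classically; quantum counting brings the marking cost from $N$ evaluations down to $O(\sqrt{N})$ queries, and bounded-error quantum search over the candidates contributes the remaining quadratic factor, yielding the combined $\sqrt{\nicefrac{N}{\gamma}}$ scaling.

    import numpy as np

    def is_consistent(w, X, y):
        # Oracle predicate: does hyperplane w classify every sample correctly?
        # Classically this costs N evaluations; the paper instead estimates
        # the number of correctly classified points with quantum counting in
        # O(sqrt(N)) queries and marks w when that count equals N.
        return np.all(y * (X @ w) > 0)

    def version_space_search(X, y, n_candidates, seed=0):
        # Classical stand-in for the search over hyperplane instances: with
        # margin gamma, O(1/gamma) random candidates hit the version space
        # with constant probability, and bounded-error quantum search over
        # them needs only O(1/sqrt(gamma)) oracle calls.
        rng = np.random.default_rng(seed)
        for _ in range(n_candidates):
            w = rng.standard_normal(X.shape[1])
            if is_consistent(w, X, y):
                return w
        return None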
We demonstrate that it is possible to implement a quantum perceptron with a sigmoid activation function as an efficient, reversible many-body unitary operation. When inserted in a neural network, the perceptron's response is parameterized by the potential exerted by other neurons. We prove that such a quantum neural network is a universal approximator of continuous functions, with at least the same power as classical neural networks. While engineering general perceptrons is a challenging control problem (also defined in this work), the ubiquitous sigmoid-response neuron can be implemented as a quasi-adiabatic passage with an Ising model. In this construct, the scaling of resources is favorable with respect to the total network size and is dominated by the number of layers. We expect that our sigmoid perceptron will also have applications in quantum sensing and the variational estimation of many-body Hamiltonians.
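An elementary way to see how a sigmoid response fits inside a reversible operation is to encode the activation in the rotation angle of a single qubit. The toy sketch below only reproduces the input-output map on one qubit; it is not the paper's quasi-adiabatic Ising construction, which realizes the response physically in a many-body system.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_neuron_unitary(potential):
        # Reversible 2x2 rotation R_y(theta), with theta chosen so that,
        # acting on |0>, the qubit's excitation probability equals
        # sigmoid(potential).
        theta = 2.0 * np.arcsin(np.sqrt(sigmoid(potential)))
        return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                         [np.sin(theta / 2),  np.cos(theta / 2)]])

    U = sigmoid_neuron_unitary(1.5)
    prob_one = abs(U[1, 0]) ** 2   # equals sigmoid(1.5), about 0.818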
Yue Ban, Xi Chen, E. Torrontegui (2020)
The quantum perceptron is a fundamental building block for quantum machine learning, a multidisciplinary field that brings quantum computing abilities, such as state superposition and entanglement, to classical machine learning schemes. Motivated by the techniques of shortcuts to adiabaticity, we propose a sped-up quantum perceptron in which a control field on the perceptron is inversely engineered, leading to a rapid nonlinear response with a sigmoid activation function. This results in faster overall perceptron performance compared to quasi-adiabatic protocols, as well as enhanced robustness against imperfections in the controls.
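For context on the shortcuts-to-adiabaticity toolbox invoked here: the best-known member is counterdiabatic driving, where an auxiliary field cancels nonadiabatic transitions so that a ramp can be run quickly while preserving the adiabatic response. The two-level sketch below is a generic textbook instance of that idea, not the specific inverse-engineered control field proposed in this paper.

    import numpy as np

    def counterdiabatic_coefficient(delta, omega, d_delta_dt):
        # For a two-level Hamiltonian H(t) = (delta(t)*sigma_z + omega*sigma_x)/2,
        # adding H_cd(t) = c(t)*sigma_y with the coefficient below suppresses
        # nonadiabatic transitions even when delta(t) is swept rapidly:
        # c(t) = (1/2) d(theta)/dt, where theta = arctan(omega / delta)
        # is the instantaneous mixing angle.
        return -0.5 * omega * d_delta_dt / (delta**2 + omega**2)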
Near-term quantum devices can be used to build quantum machine learning models, such as quantum kernel methods and quantum neural networks (QNNs), to perform classification tasks. There have been many proposals for how to use variational quantum circuits as quantum perceptrons or as QNNs. The aim of this work is to systematically compare different QNN architectures and to evaluate their relative expressive power with a teacher-student scheme. Specifically, the teacher model generates datasets mapping random inputs to outputs, which then have to be learned by the student models. This way, we avoid training on arbitrary data sets and can compare the learning capacity of different models directly via the loss, the prediction map, the accuracy, and the relative entropy between the prediction maps. We focus particularly on a quantum perceptron model inspired by the recent work of Tacchino et al. \cite{Tacchino1} and compare it to the data re-uploading scheme that was originally introduced by Perez-Salinas et al. \cite{data_re-uploading}. We discuss alterations of the perceptron model and the formation of deep QNNs to better understand the role of hidden units and non-linearities in these architectures.
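The teacher-student protocol itself is model-agnostic, and a classical miniature makes it concrete: fix a random teacher, label random inputs with it, train a student on those labels, and score the student by how well it reproduces the teacher's prediction map. Below is a NumPy toy version with a perceptron in both roles (purely illustrative; the paper runs the same protocol with quantum circuits as teacher and student models).

    import numpy as np

    rng = np.random.default_rng(0)

    # Teacher: a fixed random perceptron labelling random inputs.
    d, n = 4, 500
    teacher_w = rng.standard_normal(d)
    X = rng.standard_normal((n, d))
    y = np.sign(X @ teacher_w)

    # Student: a classical perceptron trained on the teacher's labels.
    student_w = np.zeros(d)
    for _ in range(100):
        miss = np.flatnonzero(y * (X @ student_w) <= 0)
        if miss.size == 0:
            break
        student_w += y[miss[0]] * X[miss[0]]

    # Compare prediction maps: agreement between student and teacher.
    accuracy = np.mean(np.sign(X @ student_w) == y)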
