Binary classification is a fundamental problem in machine learning. Recent developments in quantum similarity-based binary classifiers and kernel methods that exploit quantum interference and quantum Hilbert feature spaces have opened up tremendous opportunities for quantum-enhanced machine learning. To lay the groundwork for its further advancement, this work extends the general theory of quantum kernel-based classifiers. Existing quantum kernel-based classifiers are compared and the connections among them are analyzed. Focusing on the squared overlap between quantum states as a similarity measure, the essential and minimal ingredients for quantum binary classification are examined. The classifier is also extended along various aspects, such as data type, measurement, and ensemble learning. The validity of the Hilbert-Schmidt inner product, which reduces to the squared overlap for pure states, as a positive definite and symmetric kernel is explicitly shown, thereby connecting the quantum binary classifier and kernel methods.
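The kernel property claimed above can be checked numerically. The following minimal sketch (a classical simulation with NumPy, not any particular paper's implementation) builds the Gram matrix of squared overlaps |⟨ψ_i|ψ_j⟩|² between random pure states and verifies that it is symmetric and positive semidefinite, as a valid kernel must be:

```python
import numpy as np

def random_state(dim, rng):
    """Sample a random pure state as a normalized complex vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def overlap_kernel(states):
    """Gram matrix of squared overlaps |<psi_i|psi_j>|^2, i.e. the
    Hilbert-Schmidt inner product Tr[rho_i rho_j] for pure states."""
    n = len(states)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = abs(np.vdot(states[i], states[j])) ** 2
    return K

rng = np.random.default_rng(0)
states = [random_state(4, rng) for _ in range(6)]
K = overlap_kernel(states)

# Symmetric and positive semidefinite, as required of a valid kernel.
assert np.allclose(K, K.T)
assert np.min(np.linalg.eigvalsh(K)) > -1e-10
```

The diagonal entries equal 1 because each state is normalized; the off-diagonal entries play the role of the pairwise similarities used by the classifier.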
Kernel methods have a wide spectrum of applications in machine learning. Recently, a link between quantum computing and kernel theory has been formally established, opening up opportunities for quantum techniques to enhance various existing machine learning methods.
A method for analyzing the feature map of the kernel-based quantum classifier is developed; that is, we give a general formula for computing a lower bound on the exact training accuracy, which helps us to see whether the selected feature map is suitable.
Machine Learning (ML) helps us recognize patterns in raw data. ML is used in numerous domains, e.g., biomedical, agricultural, and food technology. Despite recent technological advancements, there is still room for substantial improvement in prediction.
One key step in performing quantum machine learning (QML) on noisy intermediate-scale quantum (NISQ) devices is the dimension reduction of the input data prior to their encoding. Traditional principal component analysis (PCA) and neural networks have been used for this purpose.
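As a concrete illustration of the PCA preprocessing step mentioned above, the sketch below (a generic SVD-based PCA in NumPy, with synthetic data standing in for real features) compresses 16 classical features down to 2, a size that a small NISQ encoding circuit could accommodate:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its top principal components via SVD of the
    mean-centered data matrix; a common step before quantum encoding."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 16))   # 100 samples, 16 classical features
Z = pca_reduce(X, 2)             # reduced to 2 features for encoding
assert Z.shape == (100, 2)
```

The reduced coordinates in Z would then be fed to whatever data-encoding circuit the QML model uses.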
We combine K-means clustering with the least-squares kernel classification method. K-means clustering is used to extract a set of representative vectors for each class. The least-squares kernel method uses these representative vectors as a training set.
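The two-step scheme described above can be sketched as follows. This is a minimal illustration under assumed choices (an RBF kernel, a plain Lloyd's-algorithm K-means, ridge regularization, and synthetic Gaussian blobs as the two classes), not the paper's exact pipeline:

```python
import numpy as np

def kmeans(X, k, rng, iters=50):
    """Plain Lloyd's algorithm returning k centroids."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return C

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

# Two Gaussian blobs as toy classes (synthetic data, for illustration).
rng = np.random.default_rng(2)
X0 = rng.normal(loc=-2.0, size=(100, 2))
X1 = rng.normal(loc=+2.0, size=(100, 2))

# Step 1: K-means extracts a few representative vectors per class.
reps = np.vstack([kmeans(X0, 3, rng), kmeans(X1, 3, rng)])
y = np.array([-1.0] * 3 + [+1.0] * 3)

# Step 2: least-squares kernel classifier trained on the representatives
# (ridge-regularized solve of (K + lam*I) alpha = y).
lam = 1e-3
alpha = np.linalg.solve(rbf(reps, reps) + lam * np.eye(len(y)), y)

predict = lambda X: np.sign(rbf(X, reps) @ alpha)
acc = np.mean(np.concatenate([predict(X0) == -1, predict(X1) == +1]))
```

Because only the representatives enter the kernel solve, the linear system stays small regardless of the size of the original training set, which is the practical appeal of the combination.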