
Hybrid quantum-classical unsupervised data clustering based on the Self-Organizing Feature Map

Added by Ilia Lazarev
Publication date: 2020
Fields: Physics
Language: English





Unsupervised machine learning is one of the main techniques employed in artificial intelligence. Quantum computers offer opportunities to speed up such machine learning techniques. Here, we introduce an algorithm for quantum-assisted unsupervised data clustering using the self-organizing feature map, a type of artificial neural network. We make a proof-of-concept realization of one of the central components on the IBM Q Experience and show that it allows us to reduce the number of calculations by a factor given by the number of clusters. We compare the results with the classical algorithm on a toy example of unsupervised text clustering.
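For reference, one classical training step of a self-organizing feature map can be sketched in a few lines of NumPy: find the best-matching unit (BMU) for an input vector, then pull the BMU and its grid neighbours toward that input. The quantum-assisted component described in the abstract targets the distance/winner computation; the sketch below is purely classical and only illustrative, with the grid size, learning rate, and neighbourhood width being arbitrary choices.

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One classical SOM update: find the best-matching unit (BMU) for input x,
    then pull nearby neurons toward x.  The quantum-assisted variant above
    targets the distance evaluation used to pick the BMU."""
    rows, cols, dim = weights.shape          # (rows, cols, dim) grid of neurons
    d2 = ((weights - x) ** 2).sum(axis=2)    # squared distances to every neuron
    bmu = np.unravel_index(np.argmin(d2), d2.shape)
    gy, gx = np.indices((rows, cols))        # Gaussian neighbourhood on the grid
    h = np.exp(-((gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)
    return weights, bmu

# toy usage: a 5x5 map over 3-dimensional data
rng = np.random.default_rng(0)
w = rng.random((5, 5, 3))
for x in rng.random((100, 3)):
    w, _ = som_step(w, x)
```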

Related research

M. Andrecut, 2021
We discuss a diffusion-based implementation of the self-organizing map on the unit hypersphere. We show that this approach can be efficiently implemented using just linear algebra methods, give a Python NumPy implementation, and illustrate the approach using the well-known MNIST dataset.
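The general idea can be mimicked in a few lines of NumPy: keep the prototypes on the unit hypersphere, select winners by cosine similarity, and renormalise after every update. This is only an illustrative sketch of a spherical SOM, not the diffusion-based implementation described above; the ring neighbourhood and all parameters are arbitrary choices.

```python
import numpy as np

def spherical_som_step(W, x, lr=0.05, sigma=1.0):
    """W: (n_units, dim) unit-norm prototypes; x: unit-norm input.
    Winner selection by cosine similarity; update is projected back onto the sphere."""
    sims = W @ x                               # cosine similarities (unit vectors)
    bmu = np.argmax(sims)
    idx = np.arange(len(W))                    # simple 1-D ring neighbourhood
    h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
    W = W + lr * h[:, None] * (x - W)
    return W / np.linalg.norm(W, axis=1, keepdims=True), bmu

# toy usage: 20 prototypes on the 8-dimensional unit sphere
rng = np.random.default_rng(1)
W = rng.normal(size=(20, 8))
W /= np.linalg.norm(W, axis=1, keepdims=True)
for x in rng.normal(size=(200, 8)):
    W, _ = spherical_som_step(W, x / np.linalg.norm(x))
```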
Anomaly detection plays a key role in industrial manufacturing for product quality control. Traditional methods for anomaly detection are rule-based, with limited generalization ability. Recent methods based on supervised deep learning are more powerful but require large-scale annotated datasets for training. In practice, abnormal products are rare, so it is very difficult to train a deep model in a fully supervised way. In this paper, we propose a novel unsupervised anomaly detection approach based on the Self-organizing Map (SOM). Our method, Self-organizing Map for Anomaly Detection (SOMAD), maintains normal characteristics by using topological memory based on multi-scale features. SOMAD achieves state-of-the-art performance on unsupervised anomaly detection and localization on the MVTec dataset.
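The scoring idea underlying SOM-based anomaly detectors can be summarised as: fit the map on normal samples only, then flag a test feature as anomalous when it lies far from every prototype. The sketch below scores samples by the distance to their nearest prototype; the CNN feature extractor, multi-scale aggregation, and thresholding used by SOMAD are omitted, and all names and sizes are illustrative.

```python
import numpy as np

def anomaly_scores(prototypes, features):
    """prototypes: (n_units, dim) SOM weights fitted on normal data only.
    features:   (n_samples, dim) test features (e.g. from a CNN backbone).
    Returns one score per sample: distance to the nearest prototype."""
    d2 = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))

# toy usage: prototypes cover "normal" data near the origin
rng = np.random.default_rng(2)
protos = rng.normal(scale=0.5, size=(16, 32))
normal = rng.normal(scale=0.5, size=(10, 32))
abnormal = rng.normal(loc=3.0, scale=0.5, size=(10, 32))
print(anomaly_scores(protos, normal).mean() < anomaly_scores(protos, abnormal).mean())  # True
```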
One key step in performing quantum machine learning (QML) on noisy intermediate-scale quantum (NISQ) devices is the dimension reduction of the input data prior to encoding. Traditional principal component analysis (PCA) and neural networks have been used to perform this task; however, the classical and quantum layers are usually trained separately. A framework that allows for a better integration of the two key components is thus highly desirable. Here we introduce a hybrid model combining quantum-inspired tensor networks (TN) and variational quantum circuits (VQC) to perform supervised learning tasks, which allows for end-to-end training. We show that a matrix product state based TN with low bond dimension performs better than PCA as a feature extractor to compress data for the input of VQCs in binary classification of the MNIST dataset. The architecture is highly adaptable and can easily incorporate extra quantum resources when available.
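The role of the tensor network as a feature extractor can be illustrated with a plain-NumPy matrix product state (MPS) contraction: each pixel is embedded into a small local space and contracted through low-bond-dimension tensors, leaving a short feature vector that could then be encoded into a variational quantum circuit. The local embedding, the random (untrained) tensors, and the sizes below are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

def mps_features(image, mps, chi):
    """Compress a flattened image into a chi-dimensional feature vector.
    image: (n_pixels,) values in [0, 1]; mps: list of (chi, 2, chi) tensors."""
    # common local embedding of a pixel value into a 2-dimensional vector
    phi = np.stack([np.cos(np.pi / 2 * image), np.sin(np.pi / 2 * image)], axis=1)
    v = np.ones(chi) / np.sqrt(chi)                       # left boundary vector
    for A, p in zip(mps, phi):
        v = v @ np.tensordot(A, p, axes=([1], [0]))       # contract the local index
        v /= np.linalg.norm(v) + 1e-12                    # keep the sweep numerically stable
    return v                                              # chi-dimensional feature

# toy usage: a 64-"pixel" image, bond dimension 4
rng = np.random.default_rng(3)
chi, n = 4, 64
mps = [rng.normal(scale=0.5, size=(chi, 2, chi)) for _ in range(n)]
feat = mps_features(rng.random(n), mps, chi)
print(feat.shape)   # (4,) -- small enough to encode into a few-qubit VQC
```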
A method for analyzing the feature map of a kernel-based quantum classifier is developed; that is, we give a general formula for computing a lower bound on the exact training accuracy, which helps us see whether the selected feature map is suitable for linearly separating the dataset. We show a proof-of-concept demonstration of this method for a class of 2-qubit classifiers on several 2-dimensional datasets. Also, a synthesis method that combines different kernels to construct a better-performing feature map in a larger feature space is presented.
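One concrete way to probe whether a feature map can linearly separate a dataset is to build its kernel matrix and evaluate a simple classifier in the induced feature space; the training accuracy of that classifier is a lower bound on what the optimal separating hyperplane achieves. The sketch below does this for an illustrative 2-qubit angle-encoding feature map and a nearest-class-centroid rule written entirely in terms of the kernel; it is a generic illustration of the idea, not the paper's specific formula.

```python
import numpy as np

def feature_state(x):
    """Illustrative 2-qubit angle-encoding feature map |phi(x)> for x in R^2
    (an assumption for this sketch, not the map analysed in the paper)."""
    def qubit(theta):
        return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
    return np.kron(qubit(x[0]), qubit(x[1]))      # 4-dimensional statevector

def kernel_matrix(X):
    states = np.array([feature_state(x) for x in X])
    return (states @ states.T) ** 2               # K(x, y) = |<phi(x)|phi(y)>|^2

# toy 2-dimensional dataset with two well-separated classes
rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0.3, 0.3], 0.1, size=(20, 2)),
               rng.normal([2.5, 2.5], 0.1, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

# nearest-class-centroid classifier written purely in terms of the kernel:
# ||phi(x) - mu_c||^2 = K(x,x) - 2*mean_i K(x, x_i) + mean_{ij} K(x_i, x_j)
K = kernel_matrix(X)
d2 = [np.diag(K) - 2 * K[:, y == c].mean(axis=1) + K[np.ix_(y == c, y == c)].mean()
      for c in (0, 1)]
pred = np.argmin(d2, axis=0)
print("training accuracy (a lower bound on the optimum):", (pred == y).mean())
```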
Machine learning techniques have led to broad adoption of a statistical model of computing. The statistical distributions natively available on quantum processors are a superset of those available classically. Harnessing this attribute has the potential to accelerate or otherwise improve machine learning relative to purely classical performance. A key challenge toward that goal is learning to hybridize classical computing resources and traditional learning techniques with the emerging capabilities of general purpose quantum processors. Here, we demonstrate such hybridization by training a 19-qubit gate model processor to solve a clustering problem, a foundational challenge in unsupervised learning. We use the quantum approximate optimization algorithm in conjunction with a gradient-free Bayesian optimization to train the quantum machine. This quantum/classical hybrid algorithm shows robustness to realistic noise, and we find evidence that classical optimization can be used to train around both coherent and incoherent imperfections.
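The overall recipe in such hybrid schemes can be reproduced at toy scale with a statevector simulation: cast two-way clustering as weighted MaxCut on the pairwise-distance graph, evaluate the QAOA cost as a black box, and let a gradient-free outer loop choose the circuit angles. The sketch below uses plain random search in place of the Bayesian optimiser and tiny data, purely as an illustration of the structure, not the hardware experiment described above.

```python
import numpy as np
from itertools import combinations

def qaoa_cut_expectation(gammas, betas, w):
    """Statevector simulation of depth-p QAOA for weighted MaxCut.
    w: symmetric (n, n) matrix of pairwise distances, so that cutting heavy
    edges places dissimilar points in different clusters."""
    n = w.shape[0]
    # diagonal of the cost Hamiltonian: the cut value of every bitstring
    bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
    cost = np.array([sum(w[i, j] for i, j in combinations(range(n), 2) if b[i] != b[j])
                     for b in bits])
    state = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)     # |+...+>
    for g, bt in zip(gammas, betas):
        state = np.exp(-1j * g * cost) * state                   # cost layer
        for q in range(n):                                        # mixer: exp(-i*beta*X_q)
            s = state.reshape(-1, 2, 2 ** q)
            s0, s1 = s[:, 0, :].copy(), s[:, 1, :].copy()
            s[:, 0, :] = np.cos(bt) * s0 - 1j * np.sin(bt) * s1
            s[:, 1, :] = np.cos(bt) * s1 - 1j * np.sin(bt) * s0
            state = s.reshape(-1)
    return float(np.abs(state) ** 2 @ cost)

# toy data: five points forming two obvious clusters
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [3.0, 3.0], [3.1, 3.0]])
w = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

# gradient-free "training": random search over the angles, standing in for the
# Bayesian optimisation used in the paper
rng = np.random.default_rng(5)
best_val, best_angles = -np.inf, None
for _ in range(200):
    g, b = rng.uniform(0, np.pi, size=2), rng.uniform(0, np.pi, size=2)
    val = qaoa_cut_expectation(g, b, w)
    if val > best_val:
        best_val, best_angles = val, (g, b)

# brute-force optimum for comparison: the best cut separates the two blobs
opt = max(sum(w[i, j] for i, j in combinations(range(len(pts)), 2) if ((x >> i) ^ (x >> j)) & 1)
          for x in range(2 ** len(pts)))
print(f"QAOA expectation {best_val:.3f} vs optimal cut {opt:.3f}")
```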
