
An end-to-end trainable hybrid classical-quantum classifier

Posted by Ying-Jer Kao
Publication date: 2021
Paper language: English





We introduce a hybrid model combining a quantum-inspired tensor network and a variational quantum circuit to perform supervised learning tasks. This architecture allows the classical and quantum parts of the model to be trained simultaneously, providing an end-to-end training framework. We show that, compared to principal component analysis, a tensor network based on the matrix product state with low bond dimensions performs better as a feature extractor for the input data of the variational quantum circuit in the binary and ternary classification of the MNIST and Fashion-MNIST datasets. The architecture is highly adaptable, and the classical-quantum boundary can be adjusted according to the availability of quantum resources by exploiting the correspondence between tensor networks and quantum circuits.
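As a rough illustration of this architecture, the sketch below wires a trainable matrix product state (MPS) feature extractor into a variational quantum circuit and takes one joint gradient step. It is a minimal sketch, not the authors' code: PyTorch and PennyLane are assumed, and all names and hyperparameters (`MPSFeatureExtractor`, `bond_dim`, the toy 8x8 input size) are illustrative.

```python
# Minimal sketch of a jointly trained MPS -> VQC classifier.
# Assumes PyTorch and PennyLane; names and sizes are illustrative.
import torch
import pennylane as qml

n_qubits, n_layers, bond_dim, n_pixels = 4, 2, 8, 64   # toy sizes

class MPSFeatureExtractor(torch.nn.Module):
    """Matrix product state with a low bond dimension as a trainable feature map."""
    def __init__(self, n_sites, bond_dim, out_dim):
        super().__init__()
        self.tensors = torch.nn.ParameterList(
            [torch.nn.Parameter(0.1 * torch.randn(bond_dim, 2, bond_dim))
             for _ in range(n_sites)])
        self.head = torch.nn.Linear(bond_dim, out_dim)

    def forward(self, x):                    # x: (batch, n_sites), pixels in [0, 1]
        # Local feature map: each pixel becomes a 2-vector (cos, sin).
        phi = torch.stack((torch.cos(0.5 * torch.pi * x),
                           torch.sin(0.5 * torch.pi * x)), dim=-1)
        v = x.new_ones(x.shape[0], self.tensors[0].shape[0])  # boundary vector
        for i, A in enumerate(self.tensors):
            # Contract the physical leg with the local feature, then the bond leg.
            v = torch.einsum('bl,bp,lpr->br', v, phi[:, i], A)
            v = v / (v.norm(dim=1, keepdim=True) + 1e-9)      # numerical stability
        return torch.pi * torch.tanh(self.head(v))            # angles for the VQC

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))          # scalar output per sample

vqc = qml.qnn.TorchLayer(circuit, {"weights": (n_layers, n_qubits, 3)})
model = torch.nn.Sequential(MPSFeatureExtractor(n_pixels, bond_dim, n_qubits), vqc)

# One joint step: the classical MPS and the quantum weights update together.
opt = torch.optim.Adam(model.parameters(), lr=0.01)
x = torch.rand(8, n_pixels)                   # stand-in for 8x8 image patches
y = torch.randint(0, 2, (8,)).float()
loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the MPS tensors and the circuit weights sit in a single optimizer, the classical feature extractor adapts to what the quantum classifier needs, which is the point of end-to-end training.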


Read also

One key step in performing quantum machine learning (QML) on noisy intermediate-scale quantum (NISQ) devices is the dimension reduction of the input data prior to encoding. Traditional principal component analysis (PCA) and neural networks have been used to perform this task; however, the classical and quantum layers are usually trained separately. A framework that allows for better integration of the two key components is thus highly desirable. Here we introduce a hybrid model combining quantum-inspired tensor networks (TN) and variational quantum circuits (VQC) to perform supervised learning tasks, which allows for end-to-end training. We show that a matrix-product-state-based TN with low bond dimensions performs better than PCA as a feature extractor to compress data for the input of VQCs in binary classification on the MNIST dataset. The architecture is highly adaptable and can easily incorporate extra quantum resources when available.
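For contrast with the trainable MPS above, here is a minimal sketch of the PCA baseline this abstract compares against: compress raw pixels to a handful of features before angle encoding. Scikit-learn is an assumed, standard choice; nothing here is the authors' code.

```python
# PCA baseline: fit once, then freeze; features become rotation angles.
import numpy as np
from sklearn.decomposition import PCA

n_qubits = 4
X_train = np.random.rand(1000, 784)        # stand-in for flattened MNIST images

pca = PCA(n_components=n_qubits)
Z = pca.fit_transform(X_train)             # (1000, n_qubits) compressed features

# Rescale each component to [0, pi] so it can serve as a rotation angle.
Z = np.pi * (Z - Z.min(0)) / (Z.max(0) - Z.min(0) + 1e-9)
```

Because PCA is fit once and then frozen, gradients from the quantum circuit never reach the compression step; the jointly trained MPS extractor removes exactly that gap.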
Re-Bing Wu, Xi Cao, Pinchen Xie (2020)
Toward quantum machine learning deployed on imperfect near-term intermediate-scale quantum (NISQ) processors, the entire physical implementation should include as few hand-designed modules as possible, with only a few ad hoc parameters to be determined. This work presents such a hardware-friendly end-to-end quantum machine learning scheme that can be implemented with imperfect NISQ processors. The proposal transforms the machine learning task into the optimization of controlled quantum dynamics, in which the learning model is parameterized by experimentally tunable control variables. Our design also enables automated feature selection by encoding the raw input into quantum states through agent control variables. Compared with gate-based parameterized quantum circuits, the proposed end-to-end quantum learning model is easy to implement, as there are only a few ad hoc parameters to be determined. Numerical simulations on the benchmark MNIST dataset demonstrate that the model can achieve high performance using only 3-5 qubits without downsizing the dataset, which shows great potential for accomplishing large-scale real-world learning tasks on NISQ processors.
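The following toy illustrates the idea of learning through controlled dynamics: the trainable parameters are piecewise-constant control amplitudes on a single qubit, and the input feature enters as a bias on the drive. The Hamiltonians, the encoding, and all names are illustrative assumptions, not the authors' experimental design.

```python
# Toy "learning model = control pulse" example on one simulated qubit.
import torch

sx = torch.tensor([[0., 1.], [1., 0.]], dtype=torch.complex64)   # Pauli-X drive
sz = torch.tensor([[1., 0.], [0., -1.]], dtype=torch.complex64)  # Pauli-Z drift
n_slices, dt = 10, 0.1
theta = torch.zeros(n_slices, requires_grad=True)  # tunable control variables

def readout(x_feature):
    # Evolve |0> under H = sz + (theta_k + x) * sx, one time slice at a time.
    psi = torch.tensor([1. + 0j, 0. + 0j])
    for k in range(n_slices):
        H = sz + (theta[k] + x_feature) * sx
        psi = torch.linalg.matrix_exp(-1j * dt * H) @ psi
    return (psi.conj() @ (sz @ psi)).real          # <Z> expectation in [-1, 1]

opt = torch.optim.Adam([theta], lr=0.05)
x, y = torch.tensor(0.3), torch.tensor(1.0)        # one (feature, label) pair
for _ in range(200):
    loss = (readout(x) - y) ** 2                   # push <Z> toward the label
    opt.zero_grad(); loss.backward(); opt.step()
```

There is no gate decomposition anywhere: the optimizer shapes the pulse sequence directly, which is what makes the scheme hardware-friendly.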
Many real-world tasks involve identifying patterns in data while satisfying background or prior knowledge. In domains like materials discovery, due to flaws and biases in raw experimental data, the identification of X-ray diffraction (XRD) patterns often requires a huge amount of manual work to find refined phases that are similar to the ideal theoretical ones. Automatically refining the raw XRD patterns using simulated theoretical data is thus desirable. We propose imitation refinement, a novel approach that refines imperfect input patterns, guided by a pre-trained classifier incorporating prior knowledge from simulated theoretical data, such that the refined patterns imitate the ideal data. The classifier is trained on the ideal simulated data to classify patterns and learns an embedding space in which each class is represented by a prototype. The refiner learns to refine imperfect patterns with small modifications such that their embeddings move closer to the corresponding prototypes. We show that the refiner can be trained in both supervised and unsupervised fashions. We further illustrate the effectiveness of the proposed approach, both qualitatively and quantitatively, on a digit refinement task and an X-ray diffraction pattern refinement task in materials discovery.
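A minimal sketch of the objective described here: a frozen, pretrained embedder with one prototype per class guides a refiner that makes small corrections to imperfect inputs. `embedder`, `refiner`, and the dimensions are illustrative placeholders, not the paper's architecture.

```python
# Imitation-refinement loss sketch: pull refined embeddings toward class
# prototypes while penalizing the size of the modification.
import torch

D = 32
embedder = torch.nn.Linear(100, D)         # stand-in for the pretrained embedding
for p in embedder.parameters():
    p.requires_grad_(False)                # prior knowledge stays fixed

refiner = torch.nn.Linear(100, 100)        # learns small corrections
prototypes = torch.randn(10, D)            # one prototype per class, from pretraining

def refinement_loss(x_raw, y, alpha=0.1):
    delta = refiner(x_raw)                 # proposed modification
    z = embedder(x_raw + delta)            # embedding of the refined pattern
    pull = ((z - prototypes[y]) ** 2).sum(1).mean()   # imitate the ideal class
    stay = (delta ** 2).sum(1).mean()                 # keep modifications small
    return pull + alpha * stay
```

In an unsupervised variant, the target prototype could be the nearest one in embedding space rather than the one given by the label.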
Synthesized speech from articulatory movements can have real-world uses for patients with vocal cord disorders, in situations requiring silent speech, or in high-noise environments. In this work, we present EMA2S, an end-to-end multimodal articulatory-to-speech system that directly converts articulatory movements into speech signals. We use a neural-network-based vocoder combined with multimodal joint training, incorporating spectrogram, mel-spectrogram, and deep features. The experimental results confirm that the multimodal approach of EMA2S outperforms the baseline system in terms of both objective and subjective evaluation metrics. Moreover, the results demonstrate that joint mel-spectrogram and deep-feature loss training can effectively improve system performance.
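The joint objective can be sketched as an L1 term each on the linear spectrogram, the mel-spectrogram, and deep features of the spectrogram. `deep_net` and the loss weights below are illustrative placeholders, not the paper's exact configuration.

```python
# Sketch of a multimodal joint-training loss in the EMA2S spirit.
import torch

l1 = torch.nn.L1Loss()

def ema2s_loss(pred_spec, true_spec, pred_mel, true_mel,
               deep_net, w_mel=1.0, w_deep=0.5):
    loss_spec = l1(pred_spec, true_spec)
    loss_mel = l1(pred_mel, true_mel)
    with torch.no_grad():
        target_feat = deep_net(true_spec)     # target features carry no gradient
    loss_deep = l1(deep_net(pred_spec), target_feat)
    return loss_spec + w_mel * loss_mel + w_deep * loss_deep
```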
Due to the need to store intermediate activations for back-propagation, end-to-end (E2E) training of deep networks usually suffers from a high GPU memory footprint. This paper aims to address this problem by revisiting locally supervised learning, where a network is split into gradient-isolated modules and trained with local supervision. We show experimentally that simply training local modules with the E2E loss tends to collapse task-relevant information at early layers, and hence hurts the performance of the full model. To avoid this issue, we propose an information propagation (InfoPro) loss, which encourages local modules to preserve as much useful information as possible while progressively discarding task-irrelevant information. As the InfoPro loss is difficult to compute in its original form, we derive a feasible upper bound as a surrogate optimization objective, yielding a simple but effective algorithm. In fact, we show that the proposed method boils down to minimizing the combination of a reconstruction loss and a normal cross-entropy/contrastive term. Extensive empirical results on five datasets (CIFAR, SVHN, STL-10, ImageNet, and Cityscapes) validate that InfoPro achieves competitive performance with less than 40% of the memory footprint of E2E training, while allowing the use of higher-resolution training data or larger batch sizes under the same GPU memory constraint. Our method also enables training local modules asynchronously for potential training acceleration. Code is available at: https://github.com/blackfeather-wang/InfoPro-Pytorch.
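The surrogate objective mentioned above can be sketched as follows: each module is trained with its own reconstruction-plus-cross-entropy loss, and activations are detached before entering the next module, so no end-to-end graph is kept in memory. Module shapes and the decoder/classifier heads are illustrative placeholders.

```python
# Gradient-isolated local training sketch in the InfoPro spirit.
import torch
import torch.nn.functional as F

modules = torch.nn.ModuleList(
    [torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())
     for _ in range(3)])
decoders = torch.nn.ModuleList([torch.nn.Linear(64, 64) for _ in range(3)])  # reconstruction heads
heads = torch.nn.ModuleList([torch.nn.Linear(64, 10) for _ in range(3)])     # local classifiers
params = (list(modules.parameters()) + list(decoders.parameters())
          + list(heads.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

def local_step(x, y, beta=0.5):
    opt.zero_grad()
    h = x
    for module, dec, head in zip(modules, decoders, heads):
        h = module(h)
        recon = ((dec(h) - x) ** 2).mean()          # preserve input information
        ce = F.cross_entropy(head(h), y)            # keep task-relevant information
        (beta * recon + ce).backward()              # frees this module's graph at once
        h = h.detach()                              # block gradients to earlier modules
    opt.step()

local_step(torch.randn(32, 64), torch.randint(0, 10, (32,)))
```

Since each module's graph is built and freed independently, peak activation memory scales with one module rather than the whole network, which is where the reported savings come from.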
