
Quantum Optimization for Training Quantum Neural Networks

Posted by: Yidong Liao
Publication date: 2021
Language: English





Training quantum neural networks (QNNs) using gradient-based or gradient-free classical optimisation approaches is severely impacted by the presence of barren plateaus in the cost landscapes. In this paper, we devise a framework for leveraging quantum optimisation algorithms to find optimal parameters of QNNs for certain tasks. To achieve this, we coherently encode the cost function of QNNs onto relative phases of a superposition state in the Hilbert space of the network parameters. The parameters are tuned with an iterative quantum optimisation structure using adaptively selected Hamiltonians. The quantum mechanism of this framework exploits hidden structure in the QNN optimisation problem and is hence expected to provide a beyond-Grover speed-up, mitigating the barren plateau issue.
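The phase-encoding idea can be pictured with a small classical simulation. The sketch below is only a Grover-adaptive-search style baseline over a hypothetical one-parameter cost landscape: cost information is written into the phases of a superposition over discretised parameter settings, and low-cost settings are amplified and sampled. It is not the paper's adaptive-Hamiltonian construction (which targets a beyond-Grover speed-up); the grid, schedule, and cost function are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_qubits = 8
N = 2 ** n_qubits                                # 256 grid points for the parameter register
thetas = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
cost = 1.0 - np.cos(thetas - 1.3) ** 2           # hypothetical QNN cost landscape

def grover_round(marked, iterations):
    """Phase-mark the parameter settings below threshold, amplify, sample once."""
    amp = np.full(N, 1 / np.sqrt(N), dtype=complex)
    for _ in range(iterations):
        amp[marked] *= -1.0                      # phase oracle on the parameter register
        amp = 2 * amp.mean() - amp               # diffusion (reflection about the mean)
    probs = np.abs(amp) ** 2
    return rng.choice(N, p=probs / probs.sum())

threshold = cost[rng.integers(N)]                # threshold from one random parameter guess
for step in range(1, 6):
    marked = cost < threshold
    if not marked.any():                         # nothing left below threshold: done
        break
    sample = grover_round(marked, iterations=step)
    if cost[sample] < threshold:
        threshold = cost[sample]
        print(f"step {step}: theta = {thetas[sample]:.3f}, cost = {threshold:.4f}")

print(f"best cost found = {threshold:.4f}, true minimum = {cost.min():.4f}")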




Read also

We introduce Quantum Graph Neural Networks (QGNN), a new class of quantum neural network ansatze tailored to represent quantum processes that have a graph structure, and that are particularly suitable for execution on distributed quantum systems over a quantum network. Along with this general class of ansatze, we introduce further specialized architectures, namely Quantum Graph Recurrent Neural Networks (QGRNN) and Quantum Graph Convolutional Neural Networks (QGCNN). We provide four example applications of QGNNs: learning Hamiltonian dynamics of quantum systems, learning how to create multipartite entanglement in a quantum network, unsupervised learning for spectral clustering, and supervised learning for graph isomorphism classification.
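As a rough picture of what a graph-structured ansatz can look like, the sketch below builds QGRNN-style layers that alternate an exponential of edge (ZZ) terms of a graph Hamiltonian with an exponential of node (X) terms, on a small 4-node cycle. The graph, parameters, and layer count are illustrative assumptions, not the architectures defined in the paper.

import numpy as np
from functools import reduce
from scipy.linalg import expm

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]         # a 4-node cycle as the interaction graph

def op_on(sites_ops):
    """Embed the given single-qubit operators on their sites of the n-qubit register."""
    ops = [I] * n
    for site, op in sites_ops:
        ops[site] = op
    return reduce(np.kron, ops)

def qgrnn_layer(theta_edges, phi_nodes):
    """One layer: exp(-i sum_ij theta_ij Z_i Z_j) followed by exp(-i sum_i phi_i X_i)."""
    H_edge = sum(t * op_on([(i, Z), (j, Z)]) for t, (i, j) in zip(theta_edges, edges))
    H_node = sum(p * op_on([(i, X)]) for p, i in zip(phi_nodes, range(n)))
    return expm(-1j * H_node) @ expm(-1j * H_edge)

rng = np.random.default_rng(1)
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                   # start from |0000>
for _ in range(3):                               # three ansatz layers with fresh parameters
    state = qgrnn_layer(rng.normal(size=len(edges)), rng.normal(size=n)) @ state
print("state norm after 3 layers:", np.linalg.norm(state))   # unitarity check, ~1.0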
Quantum machine learning promises great speedups over classical algorithms, but it often requires repeated computations to achieve a desired level of accuracy for its point estimates. Bayesian learning focuses more on sampling from posterior distributions than on point estimation, thus it might be more forgiving in the face of additional quantum noise. We propose a quantum algorithm for Bayesian neural network inference, drawing on recent advances in quantum deep learning, and simulate its empirical performance on several tasks. We find that already for small numbers of qubits, our algorithm approximates the true posterior well, while it does not require any repeated computations and thus fully realizes the quantum speedups.
G. Ferrini, 2014
This work introduces optimization strategies for continuous-variable measurement-based quantum computation (MBQC) at different levels. We provide a recipe for mitigating the effects of finite squeezing, which affect the production of cluster states and the result of a traditional MBQC. These strategies are readily implementable by several experimental groups. Furthermore, a more general scheme for MBQC is introduced that does not necessarily rely on the use of ancillary cluster states to achieve its aim, but rather on the detection of a resource state in a suitable mode basis followed by digital post-processing. A recipe is provided to optimize the adjustable parameters that are employed within this framework.
Deep quantum neural networks may provide a promising way to achieve a quantum learning advantage with noisy intermediate-scale quantum devices. Here, we use deep quantum feedforward neural networks capable of universal quantum computation to represent the mixed states of open quantum many-body systems and introduce a variational method with quantum derivatives to solve the master equation for dynamics and stationary states. Owing to the special structure of the quantum networks, this approach enjoys a number of notable features, including the absence of barren plateaus, an efficient quantum analogue of the backpropagation algorithm, resource-saving reuse of hidden qubits, general applicability independent of dimensionality and entanglement properties, as well as the convenient implementation of symmetries. As proof-of-principle demonstrations, we apply this approach to both the one-dimensional transverse-field Ising and two-dimensional $J_1-J_2$ models with dissipation, and show that it can efficiently capture their dynamics and stationary states with a desired accuracy.
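A heavily simplified picture of the variational idea (not the paper's quantum-network ansatz or its quantum derivatives) is to parameterise a density matrix and minimise the residual of the master equation. The sketch below does this classically for a hypothetical driven, lossy single qubit, using a Bloch-vector parameterisation and an off-the-shelf optimiser; the model and parameter values are illustrative.

import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator sigma_minus

omega, gamma = 1.0, 0.5                          # drive strength and decay rate (illustrative)
H = 0.5 * omega * X

def lindbladian(rho):
    """L(rho) = -i[H, rho] + gamma (sm rho sm^dag - 1/2 {sm^dag sm, rho})."""
    n_op = sm.conj().T @ sm
    return (-1j * (H @ rho - rho @ H)
            + gamma * (sm @ rho @ sm.conj().T - 0.5 * (n_op @ rho + rho @ n_op)))

def rho_from(params):
    """Bloch-vector parameterisation, squashed so the state stays inside the Bloch ball."""
    v = np.tanh(params) / np.sqrt(3.0)
    return 0.5 * (I2 + v[0] * X + v[1] * Y + v[2] * Z)

def residual(params):
    return float(np.linalg.norm(lindbladian(rho_from(params))) ** 2)

res = minimize(residual, x0=np.zeros(3), method="Nelder-Mead")
print("stationarity residual ||L(rho)||^2 :", residual(res.x))
print("variational stationary state:\n", np.round(rho_from(res.x), 3))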
Deep learning has been shown to be able to recognize data patterns better than humans in specific circumstances or contexts. In parallel, quantum computing has been shown to be able to output complex wave functions with a small number of gate operations, which could generate distributions that are hard for a classical computer to produce. Here we propose a hybrid quantum-classical convolutional neural network (QCCNN), inspired by convolutional neural networks (CNNs) but adapted to quantum computing to enhance the feature mapping process. QCCNN is friendly to current noisy intermediate-scale quantum computers, in terms of both the number of qubits and circuit depth, while retaining important features of classical CNNs, such as nonlinearity and scalability. We also present a framework to automatically compute the gradients of hybrid quantum-classical loss functions, which could be directly applied to other hybrid quantum-classical algorithms. We demonstrate the potential of this architecture by applying it to a Tetris dataset, and show that QCCNN can accomplish classification tasks with learning accuracy surpassing that of a classical CNN.
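To illustrate the quantum feature-mapping step in the spirit of such a hybrid architecture, the sketch below encodes each 2x2 image patch into rotation angles on four qubits, applies a fixed entangling layer, and reads out per-qubit <Z> expectation values as output channels for a classical network head. The circuit structure, encoding, and toy image are illustrative assumptions, not the QCCNN defined in the paper.

import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron_all(ops):
    return reduce(np.kron, ops)

def quantum_patch_features(patch):
    """Map a 2x2 patch (values in [0, 1]) to four <Z> expectation values."""
    angles = np.pi * patch.flatten()
    state = np.zeros(16); state[0] = 1.0                        # start from |0000>
    state = kron_all([ry(a) for a in angles]) @ state           # angle encoding of the patch
    state = kron_all([CNOT, CNOT]) @ state                      # entangle pairs (0,1) and (2,3)
    state = kron_all([I2, CNOT, I2]) @ state                    # entangle pair (1,2)
    return np.array([
        state @ kron_all([Z if i == q else I2 for i in range(4)]) @ state
        for q in range(4)
    ])

image = np.random.default_rng(2).random((4, 4))                 # toy 4x4 "image"
feature_map = np.array([
    quantum_patch_features(image[r:r + 2, c:c + 2])
    for r in range(0, 4, 2) for c in range(0, 4, 2)
])
print(feature_map.shape)   # (4, 4): one 4-channel feature vector per patch, for a classical head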
