
A hybrid quantum-classical neural network with deep residual learning

Posted by Yanying Liang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Inspired by the success of classical neural networks, there has been a tremendous effort to carry effective classical neural networks over to the quantum setting. In this paper, a novel hybrid quantum-classical neural network with deep residual learning (Res-HQCNN) is proposed. We first analyze how to connect the residual block structure with a quantum neural network and give the corresponding training algorithm. At the same time, the advantages and disadvantages of transferring deep residual learning to the quantum setting are discussed. As a result, the model can be trained in an end-to-end fashion, analogous to backpropagation in classical neural networks. To explore the effectiveness of Res-HQCNN, we perform extensive experiments on a classical computer with quantum data, both with and without noise. The experimental results show that Res-HQCNN learns an unknown unitary transformation better and is more robust to noisy data than the state of the art. Moreover, possible ways of combining residual learning with quantum neural networks are also discussed.
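The abstract does not spell out the circuit or training details, so the following is only a minimal sketch of the underlying idea: a classical skip connection wrapped around a small variational quantum layer, so the block computes x + f(x) and can be trained end-to-end by gradient descent. It assumes PennyLane as the simulation toolkit; the qubit count, layer structure, and names (quantum_layer, residual_quantum_block) are illustrative choices, not the paper's Res-HQCNN.

```python
# A minimal sketch (not the paper's Res-HQCNN): a variational quantum layer
# wrapped in a classical residual (skip) connection and trained end-to-end.
# Assumes PennyLane; return types can vary slightly across versions.
import pennylane as qml
from pennylane import numpy as np  # autograd-aware NumPy

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_layer(inputs, weights):
    # Angle-encode the classical features on the qubits.
    for i in range(n_qubits):
        qml.RY(inputs[i], wires=i)
    # One variational layer: general single-qubit rotations + entangling ring.
    for i in range(n_qubits):
        qml.Rot(weights[i, 0], weights[i, 1], weights[i, 2], wires=i)
    for i in range(n_qubits):
        qml.CNOT(wires=[i, (i + 1) % n_qubits])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def residual_quantum_block(x, weights):
    # Residual structure: output = x + f(x), with f the quantum layer.
    return x + np.stack(quantum_layer(x, weights))

# Toy end-to-end training step on a squared loss against a fixed target.
weights = np.pi * np.random.random((n_qubits, 3), requires_grad=True)
x = np.array([0.1, 0.2, 0.3, 0.4], requires_grad=False)
target = np.zeros(n_qubits)

def loss(w):
    return np.sum((residual_quantum_block(x, w) - target) ** 2)

weights = weights - 0.1 * qml.grad(loss)(weights)
```

Because the skip connection is purely classical, the gradient of the residual block is the identity plus the parameter-shift gradient of the quantum layer, which is what allows the block to be stacked and trained like a classical residual network.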


Read also

The high energy physics (HEP) community has a long history of dealing with large-scale datasets. To manage such voluminous data, classical machine learning and deep learning techniques have been employed to accelerate physics discovery. Recent advances in quantum machine learning (QML) have indicated the potential of applying these techniques in HEP. However, only limited results on QML applications are currently available. In particular, the challenge of processing sparse data, common in HEP datasets, has not been extensively studied in QML models. This research provides a hybrid quantum-classical graph convolutional network (QGCNN) for learning HEP data. The proposed framework demonstrates an advantage over classical multilayer perceptrons and convolutional neural networks in terms of the number of parameters. Moreover, in terms of testing accuracy, the QGCNN shows comparable performance to a quantum convolutional neural network on the same HEP dataset while requiring less than 50% of the parameters. Based on numerical simulation results, studying the application of graph convolutional operations and other QML models may prove promising in advancing HEP research and other scientific fields.
Operational forecasting centers are investing in decadal (1-10 year) forecast systems to support long-term decision making for a more climate-resilient society. One method that has previously been employed is the Dynamic Mode Decomposition (DMD) algorithm - also known as the Linear Inverse Model - which fits linear dynamical models to data. While the DMD usually approximates non-linear terms in the true dynamics as a linear system with random noise, we investigate an extension to the DMD that explicitly represents the non-linear terms as a neural network. Our weight initialization allows the network to produce sensible results before training and then improve the prediction after training as data becomes available. In this short paper, we evaluate the proposed architecture for simulating global sea surface temperatures and compare the results with the standard DMD and seasonal forecasts produced by the state-of-the-art dynamical model, CFSv2.
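As a rough illustration of the architecture described in that abstract (a linear DMD-style operator plus a neural-network term for the nonlinear dynamics, initialized so that the untrained model falls back to the linear fit), here is a hedged PyTorch sketch. The names (NonlinearDMD, fit_linear_operator) and layer sizes are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: linear (DMD-style) operator plus a neural-network correction
# for the nonlinear terms, zero-initialized so the untrained model reduces to
# the plain linear fit, matching the "sensible results before training" idea.
import torch
import torch.nn as nn

class NonlinearDMD(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        # Linear operator A, to be set from a least-squares (DMD-like) fit.
        self.A = nn.Linear(state_dim, state_dim, bias=False)
        # Nonlinear correction term learned from data.
        self.nonlinear = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )
        # Zero-initialize the last layer so the untrained model predicts A x.
        nn.init.zeros_(self.nonlinear[-1].weight)
        nn.init.zeros_(self.nonlinear[-1].bias)

    def forward(self, x):
        # One-step prediction: x_{t+1} ~ A x_t + NN(x_t).
        return self.A(x) + self.nonlinear(x)

def fit_linear_operator(model: NonlinearDMD, X: torch.Tensor, Y: torch.Tensor):
    # X, Y: (num_snapshots, state_dim) pairs with Y the next-step states.
    # Solve Y ~ X A^T in the least-squares sense, as in a standard DMD fit.
    A = torch.linalg.lstsq(X, Y).solution.T
    with torch.no_grad():
        model.A.weight.copy_(A)
```

After fit_linear_operator is called, the network weights of the correction term can be trained by ordinary gradient descent on a one-step prediction loss as more data arrives.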
We revisit residual algorithms in both model-free and model-based reinforcement learning settings. We propose the bidirectional target network technique to stabilize residual algorithms, yielding a residual version of DDPG that significantly outperforms vanilla DDPG in the DeepMind Control Suite benchmark. Moreover, we find the residual algorithm an effective approach to the distribution mismatch problem in model-based planning. Compared with the existing TD(k) method, our residual-based method makes weaker assumptions about the model and yields a greater performance boost.
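For context on what "residual algorithms" means here, the snippet below contrasts the usual semi-gradient TD critic loss (bootstrap target detached) with a residual loss in which gradients also flow through the bootstrapped term; the abstract's bidirectional target network is not reproduced. PyTorch is assumed, and q_net, target_q_net, and actor are placeholder modules.

```python
# Hedged sketch of the semi-gradient TD loss vs. a residual (Bellman-residual)
# loss for a DDPG-style critic. Not the paper's full algorithm.
import torch

def semi_gradient_td_loss(q_net, target_q_net, actor, s, a, r, s_next, gamma):
    # Standard critic loss: the bootstrap target is detached from the graph.
    with torch.no_grad():
        target = r + gamma * target_q_net(s_next, actor(s_next))
    return (q_net(s, a) - target).pow(2).mean()

def residual_loss(q_net, actor, s, a, r, s_next, gamma):
    # Residual loss: gradients flow through both Q(s, a) and the bootstrapped
    # term Q(s', pi(s')), i.e. the true Bellman residual is minimized.
    bootstrap = r + gamma * q_net(s_next, actor(s_next))
    return (q_net(s, a) - bootstrap).pow(2).mean()
```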
Kun Lei, Peng Guo, Yi Wang (2021)
For NP-hard combinatorial optimization problems, it is usually difficult to find high-quality solutions in polynomial time. The design of either an exact or an approximate algorithm for these problems often requires significantly specialized knowledge. Recently, deep learning methods have provided new directions for solving such problems. In this paper, an end-to-end deep reinforcement learning framework is proposed to solve this type of combinatorial optimization problem. The framework can be applied to different problems with only slight changes of the input (for example, for a traveling salesman problem (TSP), the input is the two-dimensional coordinates of the nodes, while for a capacity-constrained vehicle routing problem (CVRP), the input is changed to three-dimensional vectors consisting of the two-dimensional coordinates and the customer demands of the nodes), the masks, and the decoder context vectors. The proposed framework aims to improve on models in the literature in terms of both the neural network model and the training algorithm. The solution quality for TSP and CVRP instances with up to 100 nodes is significantly improved with our framework. Specifically, the average optimality gap is reduced from 4.53% (reported best, [R22]) to 3.67% for TSP with 100 nodes, and from 7.34% (reported best, [R22]) to 6.68% for CVRP with 100 nodes when using the greedy decoding strategy. Furthermore, our framework uses about 1/3 to 3/4 of the training samples required by other existing learning methods while achieving better results. Results on randomly generated instances and on benchmark instances from TSPLIB and CVRPLIB confirm that our framework has a running time that is linear in the problem size (number of nodes) during the testing phase and generalizes well from random-instance training to real-world-instance testing.
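The claim that only the inputs and masks change between problems can be made concrete with a small sketch; the helper names below (tsp_features, cvrp_mask, etc.) are illustrative and not taken from the paper.

```python
# Hedged sketch: the same encoder-decoder can serve TSP and CVRP if only the
# per-node input features and the decoding masks differ.
import torch

def tsp_features(coords):
    # coords: (N, 2) node coordinates -> features are the coordinates themselves.
    return coords

def cvrp_features(coords, demands):
    # coords: (N, 2), demands: (N,) -> 3-D features = coordinates + demand.
    return torch.cat([coords, demands.unsqueeze(-1)], dim=-1)

def tsp_mask(visited):
    # True means "cannot be selected next": only already-visited nodes.
    return visited

def cvrp_mask(visited, demands, remaining_capacity):
    # Additionally mask customers whose demand exceeds the remaining capacity.
    return visited | (demands > remaining_capacity)
```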
Graph neural networks (GNNs) are naturally distributed architectures for learning representations from network data. This renders them suitable candidates for decentralized tasks. In these scenarios, the underlying graph often changes with time due to link failures or topology variations, creating a mismatch between the graphs on which GNNs were trained and the ones on which they are tested. Online learning can be leveraged to retrain GNNs at testing time to overcome this issue. However, most online algorithms are centralized and usually offer guarantees only on convex problems, which GNNs rarely lead to. This paper develops the Wide and Deep GNN (WD-GNN), a novel architecture that can be updated with distributed online learning mechanisms. The WD-GNN consists of two components: the wide part is a linear graph filter and the deep part is a nonlinear GNN. At training time, the joint wide and deep architecture learns nonlinear representations from data. At testing time, the wide, linear part is retrained, while the deep, nonlinear one remains fixed. This often leads to a convex formulation. We further propose a distributed online learning algorithm that can be implemented in a decentralized setting. We also show the stability of the WD-GNN to changes of the underlying graph and analyze the convergence of the proposed online learning procedure. Experiments on movie recommendation, source localization and robot swarm control corroborate theoretical findings and show the potential of the WD-GNN for distributed online learning.
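A minimal sketch of the wide-and-deep split described in that abstract, assuming PyTorch: the wide part is a linear polynomial graph filter in the graph shift operator S, the deep part is a small two-layer nonlinear GNN, and only the wide part's parameters would be handed to the online optimizer at test time. Class names and sizes are illustrative, not the authors' code.

```python
# Hedged sketch of a wide-and-deep graph architecture: linear graph filter
# (wide) plus a small nonlinear GNN (deep); only the wide part is retrained
# online, which keeps the test-time problem convex in its parameters.
import torch
import torch.nn as nn

class GraphFilter(nn.Module):
    """Linear filter H(S) x = sum_k h_k S^k x with learnable taps h_k."""
    def __init__(self, num_taps: int, in_dim: int, out_dim: int):
        super().__init__()
        self.taps = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_taps)
        )

    def forward(self, S, x):
        # S: (N, N) graph shift operator, x: (N, in_dim) node features.
        out, z = 0.0, x
        for lin in self.taps:
            out = out + lin(z)   # apply tap h_k to S^k x
            z = S @ z            # next power of the shift operator
        return out

class WDGNN(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim, num_taps=3):
        super().__init__()
        self.wide = GraphFilter(num_taps, in_dim, out_dim)    # linear part
        self.deep1 = GraphFilter(num_taps, in_dim, hid_dim)   # nonlinear part
        self.deep2 = GraphFilter(num_taps, hid_dim, out_dim)

    def forward(self, S, x):
        deep = self.deep2(S, torch.relu(self.deep1(S, x)))
        return self.wide(S, x) + deep

    def online_parameters(self):
        # At test time, pass only these to the optimizer; the deep part stays fixed.
        return self.wide.parameters()
```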
