We study the storage capacity of quantum neural networks (QNNs) described as completely positive trace-preserving (CPTP) maps, which act on an $N$-dimensional Hilbert space. We demonstrate that QNNs can store up to $N$ linearly independent pure states and provide the structure of the corresponding maps. While the storage capacity of a classical Hopfield network scales linearly with the number of neurons, the Hilbert-space dimension grows exponentially with the number of qubits, so QNNs can store an exponential number of linearly independent states. Employing the Gardner program, we estimate the relative volume of CPTP maps with $M$ stationary states. The volume decreases exponentially with $M$ and shrinks to zero for $M \geq N+1$. We generalize our results to QNNs storing mixed states as well as input-output relations for feed-forward QNNs. Our approach opens a path to relating the storage properties of QNNs to the quantum properties of the input-output states. This paper is dedicated to the memory of Peter Wittek.
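To make the storage notion above concrete, here is a minimal sketch of the stationarity condition the abstract refers to; the symbols $\Lambda$ for the CPTP map and $|\mu_k\rangle$ for the stored patterns are our notation, not necessarily the paper's. A QNN described by a CPTP map $\Lambda$ stores the pure states $|\mu_1\rangle, \dots, |\mu_M\rangle$ if each is a fixed point of the dynamics,

$$\Lambda\big(|\mu_k\rangle\langle\mu_k|\big) = |\mu_k\rangle\langle\mu_k|, \qquad k = 1, \dots, M,$$

and the result quoted above states that such a map exists whenever the $|\mu_k\rangle$ are linearly independent and $M \leq N$.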
We study the storage of multiple phase-coded patterns as stable dynamical attractors in recurrent neural networks with sparse connectivity. To determine the synaptic strength of existing connections and store the phase-coded patterns, we introduce a …
Bipartite operations underpin both classical communication and entanglement generation. Using a superposition of classical messages, we show that the capacity of a two-qubit operation for error-free entanglement-assisted bidirectional classical communication …
Reinforcement learning with neural networks (RLNN) has recently demonstrated great promise for many problems, including some problems in quantum information theory. In this work, we apply RLNN to quantum hypothesis testing and determine the optimal m…
We introduce Quantum Graph Neural Networks (QGNN), a new class of quantum neural network ansätze which are tailored to represent quantum processes that have a graph structure, and are particularly suitable to be executed on distributed quantum systems …
Quantum machine learning promises great speedups over classical algorithms, but it often requires repeated computations to achieve a desired level of accuracy for its point estimates. Bayesian learning focuses more on sampling from posterior distributions …