Our work intends to show that: (1) Quantum Neural Networks (QNN) can be mapped onto spin networks, with the consequence that their operation can be analyzed within the framework of Topological Quantum Field Theories (TQFT); (2) Deep Neural Networks (DNN) are a subcase of QNN, in the sense that they emerge as the semiclassical limit of QNN; (3) a number of key Machine Learning (ML) concepts can be rephrased in the terminology of TQFT. Our framework also provides a working hypothesis for understanding the generalization behavior of DNN, relating it to the topological features of the graph structures involved (see the sketch below).
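To make the last point concrete, here is a minimal sketch, assuming Python with networkx, of the kind of topological graph descriptor such a hypothesis could refer to: the first Betti number (cycle rank) of the undirected graph underlying a small feed-forward architecture. The invariant and the toy 2-3-2 architecture are our illustrative choices, not constructions from the paper.

```python
# Minimal sketch (our illustration, not the paper's method): compute a simple
# topological descriptor -- the first Betti number b1 = E - V + C (number of
# independent cycles) -- for the undirected graph underlying a small DNN.
import networkx as nx

def betti_1(g: nx.Graph) -> int:
    """Cycle rank of an undirected graph: edges - vertices + components."""
    return (g.number_of_edges() - g.number_of_nodes()
            + nx.number_connected_components(g))

# A toy 2-3-2 fully connected feed-forward architecture, viewed as a graph.
layers = [2, 3, 2]
g = nx.Graph()
offset = 0
for width_in, width_out in zip(layers[:-1], layers[1:]):
    for i in range(width_in):
        for j in range(width_out):
            g.add_edge(("n", offset + i), ("n", offset + width_in + j))
    offset += width_in

print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
print("first Betti number:", betti_1(g))   # 12 - 7 + 1 = 6
```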
In this work, we address the question of whether a sufficiently deep quantum neural network can approximate a target function to arbitrary accuracy. We start with simple but typical physical situations in which the target functions are physical observables, and we then extend the discussion to situations in which the learning targets are not directly physical observables but can be expressed as physical observables in an enlarged Hilbert space with multiple replicas, such as the Loschmidt echo and the Rényi entropy. The main finding is that an accurate approximation is possible only when the input wave functions in the dataset do not exhaust the entire Hilbert space that the quantum circuit acts on; more precisely, the Hilbert space dimension of the former has to be less than half the Hilbert space dimension of the latter. In some cases, this requirement is satisfied automatically by intrinsic properties of the dataset, for instance, when the input wave function has to be symmetric between different replicas. If the requirement cannot be satisfied by the dataset itself, we show that expressivity can be restored by adding one ancillary qubit whose wave function is always fixed at the input. Our studies point toward establishing a quantum neural network analog of the universal approximation theorem, which lays the foundation for the expressivity of classical neural networks.
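A minimal numpy sketch of the dimension-counting condition described above; the span_dim helper, the random datasets, and the 3-qubit scale are our illustrative choices, not the paper's construction.

```python
# Minimal sketch (illustrative, not the paper's experiments) of the condition:
# the span of the input wave functions must occupy less than half the Hilbert
# space dimension of the circuit that processes them.
import numpy as np

def span_dim(states, tol=1e-10):
    """Dimension of the subspace spanned by the rows (input states)."""
    return int(np.linalg.matrix_rank(np.asarray(states), tol=tol))

def condition_holds(states, dim_circuit):
    return span_dim(states) < dim_circuit / 2

rng = np.random.default_rng(0)
n = 3
dim = 2 ** n                                  # circuit acts on 3 qubits

# Dataset confined to a 3-dimensional subspace: condition satisfied (3 < 4).
sub = rng.normal(size=(10, 3)) @ np.eye(3, dim)
print(condition_holds(sub, dim))              # True

# Dataset exhausting the full space: condition violated (8 is not < 4).
full = rng.normal(size=(10, dim))
print(condition_holds(full, dim))             # False

# Tensoring on one ancilla fixed at |0> doubles the circuit dimension to
# 2^(n+1) while the input span is unchanged, relaxing the requirement.
extended = np.array([np.kron(s, [1.0, 0.0]) for s in full])
print(span_dim(extended), "vs", 2 ** (n + 1) // 2)   # 8 vs 8
```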
We propose a new framework to understand how quantum effects may impact the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the celebrated Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.
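For readers unfamiliar with the Markovian open-system machinery invoked here, the following sketch, assuming QuTiP, integrates the Lindblad master equation for a single driven, dissipative spin. The drive and decay rates are illustrative; the full open Hopfield network couples many such spins through the stored patterns and is not reproduced here.

```python
# Minimal sketch of Markovian open-quantum-system dynamics with QuTiP:
# a single driven, dissipative spin (illustrative only -- the paper's open
# Hopfield model couples many such spins through the pattern matrix).
import numpy as np
import qutip as qt

omega = 1.0    # coherent drive (quantum-coherent effects)
gamma = 0.5    # dissipation rate (thermal-like decay)

H = 0.5 * omega * qt.sigmax()           # coherent part of the dynamics
c_ops = [np.sqrt(gamma) * qt.sigmam()]  # Lindblad jump operator

rho0 = qt.basis(2, 0) * qt.basis(2, 0).dag()   # start in |0><0|
tlist = np.linspace(0.0, 20.0, 400)

# Solve the Lindblad master equation and track <sigma_z>.
result = qt.mesolve(H, rho0, tlist, c_ops, e_ops=[qt.sigmaz()])
print("final <sigma_z>:", result.expect[0][-1])
```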
In this Letter, we propose a general principle for building a quantum neural network with high learning efficiency. Our stratagem is based on the equivalence between extracting information from the input state to the readout qubit and scrambling information from the readout qubit to the input qubits. We characterize quantum information scrambling by operator size growth and, by Haar random averaging over operator sizes, we propose an averaged operator size that describes the information scrambling ability of a given quantum neural network architecture; we argue that this quantity is positively correlated with the learning efficiency of the architecture. As examples, we compute the averaged operator size for several different architectures, and we also consider two typical learning tasks: a regression task on a quantum problem and a classification task on classical images. In both cases, we find that, for the architecture with the larger averaged operator size, the loss function decreases faster, or the prediction accuracy on the testing dataset increases faster, as training proceeds, indicating higher learning efficiency. Our results can be generalized to more complicated quantum neural network architectures.
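As a concrete stand-in for the operator size diagnostic, here is a small numpy sketch that Heisenberg-evolves the readout Pauli Z through a random two-qubit unitary, expands it in the Pauli basis, and averages the number of non-identity factors with weights |c_P|^2. This common definition of operator size, the two-qubit scale, and the QR-based random unitary are our assumptions, not the paper's specific architectures.

```python
# Minimal sketch: operator size of the Heisenberg-evolved readout operator
# O' = U^dagger Z_0 U, expanded in the Pauli basis; the "size" of a Pauli
# string is its number of non-identity factors, averaged with weights |c_P|^2.
import itertools
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

def operator_size(O, n):
    """Coefficient-weighted average Pauli-string size of an n-qubit operator."""
    total, norm = 0.0, 0.0
    for labels in itertools.product("IXYZ", repeat=n):
        P = np.array([[1.0]], dtype=complex)
        for l in labels:
            P = np.kron(P, PAULIS[l])
        c = np.trace(P @ O) / 2 ** n                 # Pauli coefficient
        w = abs(c) ** 2
        total += w * sum(l != "I" for l in labels)
        norm += w
    return total / norm

# Random unitary on 2 qubits via QR decomposition of a complex Gaussian.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

Z0 = np.kron(Z, I)                                   # readout Z on qubit 0
evolved = U.conj().T @ Z0 @ U                        # Heisenberg evolution
print("size before:", operator_size(Z0, 2))          # 1.0
print("size after :", operator_size(evolved, 2))     # grows for a scrambling U
```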
The Hückel Hamiltonian is a remarkably simple tight-binding model famed for its ability to capture the qualitative physics arising from electron interactions in molecules and materials. Part of its simplicity comes from using only two types of empirically fit, physics-motivated parameters: the first describes the orbital energies on each atom, and the second describes the electronic interactions and bonding between atoms. By replacing these traditionally static parameters with dynamically predicted values, we vastly increase the accuracy of the extended Hückel model. The dynamic values are generated with a deep neural network, which is trained to reproduce orbital energies and densities derived from density functional theory. The resulting model retains interpretability, while the deep neural network parameterization is smooth, accurate, and reproduces insightful features of the original static parameterization. Finally, we demonstrate that it is the Hückel model, and not the deep neural network, that is responsible for capturing intricate orbital interactions in two molecular case studies. Overall, this work shows the promise of using machine learning to formulate simple, accurate, and dynamically parameterized physics models.
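A toy numpy sketch of the dynamic-parameterization idea: a tiny two-layer network (with random weights standing in for the trained deep network) maps per-atom and per-bond features to on-site energies alpha and hoppings beta, from which a Hückel Hamiltonian is assembled and diagonalized. The feature shapes, network size, and the 4-site chain are all illustrative assumptions.

```python
# Toy sketch of dynamically parameterized Hückel theory (illustrative only:
# a tiny random-weight MLP stands in for the deep network that the paper
# trains against DFT orbital energies and densities).
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    """A minimal two-layer network mapping features to one parameter."""
    return np.tanh(x @ w1) @ w2

# Random stand-in weights; in the paper these would be trained against DFT.
w1_site, w2_site = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))
w1_bond, w2_bond = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

# A 4-site chain (a butadiene-like pi system); the features are illustrative.
n_atoms = 4
site_features = rng.normal(size=(n_atoms, 4))
bond_features = rng.normal(size=(n_atoms - 1, 4))

alpha = mlp(site_features, w1_site, w2_site).ravel()   # dynamic on-site energies
beta = mlp(bond_features, w1_bond, w2_bond).ravel()    # dynamic hoppings

H = np.diag(alpha)
for i, b in enumerate(beta):                           # nearest-neighbour bonds
    H[i, i + 1] = H[i + 1, i] = b

orbital_energies = np.linalg.eigvalsh(H)               # Hückel orbital spectrum
print(orbital_energies)
```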
The dynamical behaviour of a weakly diluted, fully inhibitory network of pulse-coupled spiking neurons is investigated. Upon increasing the coupling strength, a transition from a regular to a stochastic-like regime is observed. In the weak-coupling phase, a periodic dynamics is rapidly approached, with all neurons firing at the same rate and mutually phase-locked. The strong-coupling phase is characterized by an irregular pattern, even though the maximum Lyapunov exponent is negative. The paradox is resolved by drawing an analogy with the phenomenon of ``stable chaos'', i.e. by observing that the stochastic-like behaviour is limited to a transient that is exponentially long in the system size. Remarkably, the transient dynamics turns out to be stationary.
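A minimal numpy sketch of a diluted, fully inhibitory pulse-coupled network of leaky integrate-and-fire neurons, in the spirit of the model studied here; the coupling strength, dilution probability, drive, and Euler integration step are our illustrative choices, not the paper's parameters.

```python
# Minimal sketch of a diluted, fully inhibitory pulse-coupled LIF network
# (Euler integration; all parameters are illustrative choices).
import numpy as np

rng = np.random.default_rng(2)
N, g, p = 100, 0.5, 0.9          # neurons, inhibitory coupling, connection prob.
I_ext, tau, v_th, v_reset = 1.5, 1.0, 1.0, 0.0
dt, steps = 1e-3, 50_000

# Weak dilution: each directed connection present with probability p;
# all couplings are negative (purely inhibitory).
J = -(g / N) * (rng.random((N, N)) < p)
np.fill_diagonal(J, 0.0)

v = rng.random(N)                          # random initial membrane potentials
spikes = np.zeros(N)
for _ in range(steps):
    v += dt * (-v / tau + I_ext)           # leaky integration, constant drive
    fired = v >= v_th
    if fired.any():
        v[fired] = v_reset
        v += J[:, fired].sum(axis=1)       # instantaneous inhibitory pulses
        spikes[fired] += 1

print("mean firing rate:", spikes.mean() / (steps * dt))
```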