Neuromorphic quantum computing

Added by Christian Pehle
Publication date: 2020
Field: Physics
Language: English





We propose that neuromorphic computing can perform quantum operations. Spiking neurons in the active or silent states are connected to the two states of Ising spins. A quantum density matrix is constructed from the expectation values and correlations of the Ising spins. As a step towards quantum computation, we show for a two-qubit system that quantum gates can be learned as a change of parameters for the neural network dynamics. Our proposal for probabilistic computing goes beyond Markov chains, which are based on transition probabilities. Constraints on classical probability distributions relate changes made in one part of the system to other parts, similar to entangled quantum systems.
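To make the construction concrete: any two-qubit density matrix can be written in the Pauli basis as ρ = (1/4) Σ_{μν} χ_{μν} σ_μ ⊗ σ_ν, where the coefficients χ_{μν} are precisely expectation values and correlations of the spins. The numpy sketch below assembles ρ from such a correlation table; the function name and example values are illustrative, not taken from the paper.

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices, indexed by label.
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def density_matrix_from_correlations(chi):
    """Assemble rho = (1/4) sum_{mu,nu} chi[mu,nu] sigma_mu (x) sigma_nu
    from a table chi of spin expectation values and correlations.
    chi["I", "I"] must be 1 for unit trace."""
    rho = np.zeros((4, 4), dtype=complex)
    for (a, sa), (b, sb) in product(PAULIS.items(), repeat=2):
        rho += chi[a, b] * np.kron(sa, sb)
    return rho / 4.0

# Example: both spins polarized along +z, so <Z1> = <Z2> = <Z1 Z2> = 1
# and every other correlation vanishes.
chi = {(a, b): 0.0 for a in PAULIS for b in PAULIS}
chi["I", "I"] = 1.0
chi["Z", "I"] = chi["I", "Z"] = chi["Z", "Z"] = 1.0

rho = density_matrix_from_correlations(chi)
print(np.round(rho.real, 3))           # the projector |00><00|
print("trace =", np.trace(rho).real)   # 1.0
```

Not every table of classical correlations yields a positive semi-definite ρ; those positivity constraints are what tie changes in one part of the system to the other parts, as the last sentence of the abstract describes.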



Related research

Quantum neuromorphic computing physically implements neural networks in brain-inspired quantum hardware to speed up their computation. In this perspective article, we show that this emerging paradigm could make the best use of existing and near-future intermediate-size quantum computers. Some approaches are based on parametrized quantum circuits and use neural-network-inspired algorithms to train them. Other approaches, closer to classical neuromorphic computing, take advantage of the physical properties of quantum oscillator assemblies to mimic neurons and compute. We discuss the different implementations of quantum neuromorphic networks with digital and analog circuits, highlight their respective advantages, and review exciting recent experimental results.
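A minimal sketch of the first approach: a one-parameter "circuit" (a single Ry rotation acting on |0⟩) trained by gradient descent so that ⟨Z⟩ hits a target value. The parameter-shift rule used for the gradient is standard for rotation gates; the target, learning rate, and names below are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

def expect_z(theta):
    """<Z> in the state Ry(theta)|0>, which equals cos(theta)."""
    psi = ry(theta) @ np.array([1, 0], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    return (psi.conj() @ Z @ psi).real

target, theta, lr = -0.5, 0.1, 0.2
for step in range(200):
    # Parameter-shift rule: exact derivative of <Z> for a rotation gate.
    shift = (expect_z(theta + np.pi / 2) - expect_z(theta - np.pi / 2)) / 2
    theta -= lr * 2 * (expect_z(theta) - target) * shift  # gradient of squared loss

print(expect_z(theta))  # converges to ~ -0.5
```

The parameter-shift rule gives the exact gradient for rotation-type gates, which is one reason it is a common choice for training such circuits on hardware.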
Neuromorphic computing takes inspiration from the brain to create energy-efficient hardware for information processing, capable of highly sophisticated tasks. In this article, we make the case that building this new hardware necessitates reinventing electronics. We show that research in physics and materials science will be key to creating artificial nano-neurons and synapses, to connecting them together in huge numbers, to organizing them in complex systems, and to computing with them efficiently. We describe how some researchers choose to take inspiration from artificial intelligence to move forward in this direction, whereas others prefer taking inspiration from neuroscience, and we highlight recent striking results obtained with these two approaches. Finally, we discuss the challenges and perspectives of neuromorphic physics, which include developing the algorithms and the hardware hand in hand, making significant advances with small toy systems, and building large-scale networks.
I.I. Yusipov, T.V. Laptyeva, 2017
In a closed single-particle quantum system, spatial disorder induces Anderson localization of eigenstates and halts wave propagation. The phenomenon is vulnerable to interaction with an environment and to decoherence, which is believed to restore normal diffusion. We demonstrate that for a class of experimentally feasible non-Hermitian dissipators, which admit signatures of localization in asymptotic states, the quantum particle switches between diffusive and ballistic regimes, depending on the phase parameter of the dissipators, while sticking to localization centers. In the diffusive regime, the statistics of quantum jumps is non-Poissonian and has a power-law interval, a footprint of intermittent locking in Anderson modes. Ballistic propagation reflects the dispersion of an ordered lattice and introduces a new timescale for jumps with a non-monotonous probability distribution. Hermitian dephasing dissipation makes the localization features vanish, and Poissonian jump statistics along with normal diffusion are recovered.
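The dissipative quantum-jump dynamics itself is beyond a short sketch, but the starting point of the abstract — disorder-induced Anderson localization of eigenstates in a closed system — is easy to check numerically. The sketch below, a minimal 1D tight-binding model assumed for illustration (not the paper's setup), diagonalizes a disordered chain and tracks the mean inverse participation ratio, which is of order 1/L for extended states and O(1) for localized ones.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J = 200, 1.0  # chain length and hopping amplitude

def mean_ipr(W):
    """Mean inverse participation ratio of the eigenstates of a 1D
    tight-binding chain with on-site disorder of strength W."""
    H = np.diag(rng.uniform(-W / 2, W / 2, L))       # random on-site energies
    H -= J * (np.eye(L, k=1) + np.eye(L, k=-1))      # nearest-neighbor hopping
    _, vecs = np.linalg.eigh(H)
    return np.mean(np.sum(np.abs(vecs) ** 4, axis=0))

for W in (0.0, 1.0, 4.0):
    print(f"W = {W}: mean IPR = {mean_ipr(W):.4f}")  # grows with disorder
```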
In this work, we address the question of whether a sufficiently deep quantum neural network can approximate a target function as accurately as possible. We start with simple but typical physical situations in which the target functions are physical observables, and then extend our discussion to situations in which the learning targets are not directly physical observables but can be expressed as physical observables in an enlarged Hilbert space with multiple replicas, such as the Loschmidt echo and the Rényi entropy. The main finding is that an accurate approximation is possible only when the input wave functions in the dataset do not exhaust the entire Hilbert space that the quantum circuit acts on; more precisely, the Hilbert space dimension of the former has to be less than half of the Hilbert space dimension of the latter. In some cases, this requirement is satisfied automatically because of intrinsic properties of the dataset, for instance, when the input wave function has to be symmetric between different replicas. If this requirement cannot be satisfied by the dataset, we show that the expressivity can be restored by adding one ancillary qubit whose wave function is always fixed at the input. Our studies point toward establishing a quantum neural network analog of the universal approximation theorem that lays the foundation for the expressivity of classical neural networks.
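A schematic dimension count behind this condition (the abstract states strictly "less than half"; treating the boundary case as satisfied, as the ancilla example requires, is an assumption on my part):

```python
def has_capacity(n_input_qubits, n_circuit_qubits):
    """Check (schematically) that the dataset's states span at most half
    of the Hilbert space the circuit acts on."""
    input_dim = 2 ** n_input_qubits      # span of the input wave functions
    circuit_dim = 2 ** n_circuit_qubits  # space the circuit acts on
    return 2 * input_dim <= circuit_dim

n = 4
print(has_capacity(n, n))      # False: the inputs exhaust the circuit's space
print(has_capacity(n, n + 1))  # True: one fixed ancilla doubles circuit_dim
```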
Yadong Wu, Pengfei Zhang, 2020
In this letter we propose a general principle for building a quantum neural network with high learning efficiency. Our strategy is based on the equivalence between extracting information from the input state to the readout qubit and scrambling information from the readout qubit to the input qubits. We characterize quantum information scrambling by operator size growth and, by Haar random averaging over operator sizes, we propose an averaged operator size to describe the information scrambling ability of a given quantum neural network architecture, and argue that this quantity is positively correlated with the learning efficiency of the architecture. As examples, we compute the averaged operator size for several different architectures, and we also consider two typical learning tasks: a regression task on a quantum problem and a classification task on classical images. In both cases we find that, for the architecture with the larger averaged operator size, the loss function decreases faster or the prediction accuracy on the test dataset increases faster as training proceeds, which means higher learning efficiency. Our results can be generalized to more complicated quantum neural networks.
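As a toy version of the diagnostic, the sketch below expands a Heisenberg-evolved Pauli operator in the Pauli-string basis and averages each string's weight (its number of non-identity factors) with the squared expansion coefficients. This is one common definition of operator size; the paper's Haar-averaged quantity may differ in detail, and the random unitary here stands in for a trained network layer.

```python
import numpy as np
from itertools import product

PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def operator_size(O, n):
    """Average Pauli-string weight of O, weighted by |Tr(P O)/2^n|^2."""
    num = den = 0.0
    for labels in product("IXYZ", repeat=n):
        P = PAULIS[labels[0]]
        for l in labels[1:]:
            P = np.kron(P, PAULIS[l])
        c = np.trace(P @ O) / 2 ** n          # expansion coefficient
        w = sum(l != "I" for l in labels)     # string weight
        num += abs(c) ** 2 * w
        den += abs(c) ** 2
    return num / den

n = 2
rng = np.random.default_rng(1)
# Haar-ish random two-qubit unitary from a QR decomposition.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
Z0 = np.kron(PAULIS["Z"], PAULIS["I"])        # "readout" operator

print(operator_size(Z0, n))                   # 1.0 before evolution
print(operator_size(U.conj().T @ Z0 @ U, n))  # ~1.6 after scrambling
```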
