Noise and decoherence are two major obstacles to the implementation of large-scale quantum computing. Because of the no-cloning theorem, which forbids making an exact copy of an arbitrary quantum state, simple redundancy will not work in a quantum context, while unwanted interactions with the environment can destroy coherence and thus the quantum nature of the computation. Because of their parallel and distributed nature, classical neural networks have long been used successfully to deal with incomplete or damaged data. In this work, we show that our model of a quantum neural network (QNN) is similarly robust to noise and, in addition, robust to decoherence. Moreover, robustness to noise and decoherence is not only maintained but improved as the size of the system is increased. Noise and decoherence may even be advantageous in training, as they help correct for overfitting. We demonstrate this robustness using entanglement as a means of pattern storage in a qubit array. Our results provide evidence that machine learning approaches can circumvent otherwise recalcitrant problems in quantum computing.
In previous work, we have proposed an entanglement indicator for a general multiqubit state, which can be learned by a quantum system acting as a neural network. The indicator can be used for a pure or a mixed state, and the state need not be close to any particular target; moreover, as the size of the system grows, the amount of additional training necessary diminishes. Here, we show that the indicator is stable to noise and decoherence.
The realization of a network of quantum registers is an outstanding challenge in quantum science and technology. We experimentally investigate a network node that consists of a single nitrogen-vacancy (NV) center electronic spin hyperfine-coupled to nearby nuclear spins. We demonstrate individual control and readout of five nuclear spin qubits within one node. We then characterize the storage of quantum superpositions in individual nuclear spins under repeated application of a probabilistic optical inter-node entangling protocol. We find that the storage fidelity is limited by dephasing during the electronic spin reset after failed attempts. By encoding quantum states into a decoherence-protected subspace of two nuclear spins, we show that quantum coherence can be maintained for over 1000 repetitions of the remote entangling protocol. These results and insights pave the way towards remote entanglement purification and the realization of a quantum repeater using NV center quantum network nodes.
We consider the description of quantum noise within the framework of the standard Copenhagen interpretation of quantum mechanics applied to a composite system-environment setting. Averaging over the environmental degrees of freedom leads to a stochastic quantum dynamics, described by equations complying with the constraints arising from the statistical structure of quantum mechanics. Simple examples are considered in the framework of open-system dynamics described within a master equation approach, pointing in particular to the appearance of decoherence and to the relevance of quantum correlation functions of the environment in determining the action of quantum noise.
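The decoherence phenomenon mentioned above can be illustrated with the simplest open-system master equation. The sketch below (illustrative only, not the paper's model) integrates single-qubit pure dephasing, drho/dt = gamma (Z rho Z - rho), under which populations are preserved while off-diagonal coherences decay as exp(-2*gamma*t):

```python
import numpy as np

# Pauli-Z, the system operator coupled to the environment in pure dephasing.
Z = np.diag([1.0, -1.0])

def dephase(rho, gamma, t, steps=2000):
    """Euler-integrate drho/dt = gamma * (Z rho Z - rho)."""
    dt = t / steps
    for _ in range(steps):
        rho = rho + dt * gamma * (Z @ rho @ Z - rho)
    return rho

# |+> state: maximal coherence between |0> and |1>.
rho0 = np.full((2, 2), 0.5)
rho_t = dephase(rho0, gamma=1.0, t=1.0)

# Populations (diagonal) are untouched; the coherence (off-diagonal)
# has decayed toward 0.5 * exp(-2) ~ 0.068 -- decoherence in action.
print(np.round(rho_t, 4))
```

The exact solution for the off-diagonal element is 0.5*exp(-2*gamma*t), so the Euler result can be checked against it directly.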
We propose to use neural networks to estimate the rates of coherent and incoherent processes in quantum systems from continuous measurement records. In particular, we adapt an image recognition algorithm to recognize patterns in experimental signals and link them to physical quantities. We demonstrate that the parameter estimation remains accurate in the presence of detector imperfections that complicate or rule out Bayesian filter analyses.
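The core idea, learning a map from noisy measurement records to physical rates, can be sketched in miniature. The toy below (all choices illustrative, far simpler than the paper's image-recognition network) trains a linear readout on simulated records y(t) = exp(-gamma*t) plus detector noise, then estimates the decay rate of an unseen record:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)  # sampling times of the measurement record

def records(gammas, noise=0.1):
    """Simulated exponential-decay records with additive detector noise."""
    clean = np.exp(-np.outer(gammas, t))
    return clean + noise * rng.standard_normal(clean.shape)

# Training set: known rates -> simulated noisy records.
gam_train = rng.uniform(0.5, 3.0, 500)
X = records(gam_train)

# Linear readout (with bias) fitted by least squares; the training
# noise itself acts as a regularizer on the weights.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], gam_train, rcond=None)

# Estimate the rate of a fresh noisy record with gamma = 1.7.
y = records(np.array([1.7]))
gam_est = (np.c_[y, np.ones(1)] @ w)[0]
print(gam_est)  # close to 1.7 despite the noise
```

A real pipeline would replace the linear readout with a convolutional network and the toy decay model with full stochastic measurement trajectories, but the supervised structure (simulate records with known parameters, regress the parameters back) is the same.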
Near-term quantum devices can be used to build quantum machine learning models, such as quantum kernel methods and quantum neural networks (QNNs), to perform classification tasks. There have been many proposals for how to use variational quantum circuits as quantum perceptrons or as QNNs. The aim of this work is to systematically compare different QNN architectures and to evaluate their relative expressive power with a teacher-student scheme. Specifically, the teacher model generates datasets mapping random inputs to outputs, which the student models then have to learn. This way, we avoid training on arbitrary datasets and can compare the learning capacity of different models directly via the loss, the prediction map, the accuracy, and the relative entropy between the prediction maps. We focus particularly on a quantum perceptron model inspired by the recent work of Tacchino et al. \cite{Tacchino1} and compare it to the data re-uploading scheme originally introduced by Perez-Salinas et al. \cite{data_re-uploading}. We discuss alterations of the perceptron model and the formation of deep QNNs to better understand the role of hidden units and non-linearities in these architectures.
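The teacher-student scheme is independent of whether the models are quantum or classical, so its structure can be shown with a minimal classical stand-in (all models here are illustrative; the paper's teachers and students are variational quantum circuits): a fixed random teacher labels random inputs, a student is trained on those labels, and learning capacity is read off from the student's accuracy rather than from performance on an arbitrary dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5  # input dimension

# Teacher: a fixed random labeller -- it defines the dataset.
w_teacher = rng.standard_normal(d)
def teacher(x):
    return (np.tanh(x @ w_teacher) > 0).astype(float)

# Dataset generated by the teacher from random inputs.
X = rng.standard_normal((2000, d))
y = teacher(X)

# Student: logistic regression trained by gradient descent on the
# teacher's labels.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # student's predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(X)      # cross-entropy gradient step

# Capacity comparison: how well does this student reproduce the teacher?
acc = np.mean(((X @ w) > 0) == y)
print(acc)  # a linear student matches this (effectively linear) teacher
```

Swapping in a different student model while keeping the teacher fixed gives the direct capacity comparison the abstract describes; with a genuinely non-linear teacher, a linear student would fall short where a deeper one would not.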