
Tensor network based machine learning of non-Markovian quantum processes

Added by Chu Guo
Publication date: 2020
Field: Physics
Language: English





We show how to learn the structure of generic, non-Markovian quantum stochastic processes using a tensor-network-based machine learning algorithm. We do this by representing the process as a matrix product operator (MPO) and training it with a database of local input states at different times and the corresponding time-nonlocal output states. In particular, we analyze a qubit coupled to an environment, predict the output state of the system at different times, and reconstruct the full system process. We show how the bond dimension of the MPO, a measure of non-Markovianity, depends on the properties of the system, of the environment, and of their interaction. This study thus opens the way to a possible experimental investigation of the process tensor and its properties.
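The training step described above — fitting a linear map from a database of input states to the corresponding output states — can be sketched in its simplest single-time, single-qubit form. The depolarizing channel, sample count, and least-squares reconstruction below are illustrative assumptions, not the paper's MPO algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" process: a single-qubit depolarizing channel,
# written as a 4x4 superoperator on row-major-vectorized density matrices.
p = 0.3
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
S_true = (1 - p) * np.eye(4, dtype=complex)
for P in paulis:
    # vec(P rho P†) = (P ⊗ conj(P)) vec(rho) in row-major vectorization
    S_true += (p / 3) * np.kron(P, P.conj())

def random_state(rng):
    """Density matrix of a random pure qubit state."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

# Database of local input states and the corresponding output states.
X = np.array([random_state(rng).reshape(-1) for _ in range(20)])
Y = X @ S_true.T  # vec(rho_out) = S_true @ vec(rho_in)

# Reconstruct the process as a linear map by least squares.
S_fit = np.linalg.lstsq(X, Y, rcond=None)[0].T

print(np.allclose(S_fit, S_true, atol=1e-8))
```

For a multi-time process the single matrix `S_fit` would be replaced by an MPO over the time steps, whose bond dimension then quantifies the memory carried between times.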




Related research

Machine learning methods have proved useful for recognizing patterns in statistical data. Measurement outcomes in quantum physics are intrinsically random; however, they do exhibit a pattern when the measurements are performed successively on an open quantum system. This pattern is due to the system-environment interaction and contains information about the relaxation rates as well as non-Markovian memory effects. Here we develop a method to extract information about the unknown environment from a series of projective single-shot measurements on the system (without resorting to process tomography). The method is based on embedding the non-Markovian system dynamics into a Markovian dynamics of the system and an effective reservoir of finite dimension. The generator of the Markovian embedding is learned by maximum likelihood estimation. We verify the method by comparing its prediction with an exactly solvable non-Markovian dynamics. The developed algorithm for learning unknown quantum environments enables one to efficiently control and manipulate quantum systems.
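The Markovian-embedding idea — dilating the system with a finite effective reservoir so that the joint dynamics is Markovian (here even unitary) while the reduced system dynamics carries memory — can be sketched minimally. The random joint Hamiltonian, one-qubit reservoir, and purity diagnostic below are illustrative assumptions, and the maximum-likelihood learning step is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embedding: one system qubit plus one effective-reservoir
# qubit evolving jointly under a random Hamiltonian; the joint dynamics
# is Markovian (unitary), while the reduced system dynamics is not.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                               # joint Hamiltonian
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * 0.2)) @ V.conj().T    # one step, dt = 0.2

rho_sys = np.array([[1, 0], [0, 0]], dtype=complex)    # system in |0><0|
rho_res = np.eye(2, dtype=complex) / 2                 # mixed reservoir
rho = np.kron(rho_sys, rho_res)

purities = []
for _ in range(50):
    rho = U @ rho @ U.conj().T
    # Partial trace over the reservoir: indices (s, r, s', r') -> (s, s')
    red = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    purities.append(float(np.real(np.trace(red @ red))))

# Revivals (non-monotonic behaviour) of the system purity along the
# trajectory signal memory effects that no memoryless system-only
# description can reproduce.
```

In the paper's setting, the generator producing `U` is not chosen at random but learned from single-shot measurement records by maximum likelihood.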
In the path integral formulation of the evolution of an open quantum system coupled to a Gaussian, non-interacting environment, the dynamical contribution of the latter is encoded in an object called the influence functional. Here, we relate the influence functional to the process tensor -- a more general representation of a quantum stochastic process -- describing the evolution. We then use this connection to motivate a tensor network algorithm for the simulation of multi-time correlations in open systems, building on recent work where the influence functional is represented in terms of time-evolving matrix product operators. By exploiting the symmetries of the influence functional, we are able to use our algorithm to achieve orders-of-magnitude improvement in the efficiency of the resulting numerical simulation. Our improved algorithm is then applied to compute exact phonon emission spectra for the spin-boson model with strong coupling, demonstrating a significant divergence from spectra derived under commonly used assumptions of memorylessness.
79 - Bassano Vacchini 2019
The study of quantum dynamics featuring memory effects has long been a topic of interest within the theory of open quantum systems, which is concerned with providing useful conceptual and theoretical tools for describing the reduced dynamics of a system interacting with an external environment. Definitions of non-Markovian processes have been introduced that try to capture the notion of memory effects by studying features of the quantum dynamical map governing the evolution of the system states, or changes in the distinguishability of the system states themselves. We introduce basic notions in the framework of open quantum systems, stressing in particular analogies and differences with models used to introduce modifications of quantum mechanics intended to help with the measurement problem. We further discuss recent developments in the treatment of non-Markovian processes and their role in considering more general modifications of quantum mechanics.
Neuroevolution, a field that draws inspiration from the evolution of brains in nature, harnesses evolutionary algorithms to construct artificial neural networks. It bears a number of intriguing capabilities that are typically inaccessible to gradient-based approaches, including optimizing neural-network architectures, hyperparameters, and even learning the training rules. In this paper, we introduce a quantum neuroevolution algorithm that autonomously finds near-optimal quantum neural networks for different machine learning tasks. In particular, we establish a one-to-one mapping between quantum circuits and directed graphs, and reduce the problem of finding the appropriate gate sequences to a task of searching suitable paths in the corresponding graph as a Markovian process. We benchmark the effectiveness of the introduced algorithm through concrete examples including classifications of real-life images and symmetry-protected topological states. Our results showcase the vast potential of neuroevolution algorithms in quantum machine learning, which would boost the exploration towards quantum learning supremacy with noisy intermediate-scale quantum devices.
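The circuit-to-graph reduction can be illustrated with a toy example; the gate set and adjacency below are invented for illustration and are not the paper's construction. Nodes stand for gates, edges for allowed successions, and candidate circuits are exactly the paths of a given length:

```python
# Toy mapping between gate sequences and paths in a directed graph:
# an edge "G1 -> G2" means gate G2 may follow gate G1, so every
# candidate circuit of length k is a k-node path through the graph.
graph = {
    "H": ["CNOT", "RZ"],
    "RZ": ["CNOT", "H"],
    "CNOT": ["H"],
}

def paths(graph, start, length):
    """All gate sequences of the given length starting from `start`."""
    if length == 1:
        return [[start]]
    out = []
    for nxt in graph.get(start, []):
        for tail in paths(graph, nxt, length - 1):
            out.append([start] + tail)
    return out

circuits = paths(graph, "H", 3)
print(circuits)  # [['H', 'CNOT', 'H'], ['H', 'RZ', 'CNOT'], ['H', 'RZ', 'H']]
```

An evolutionary search then scores each path (circuit) on the learning task and mutates or recombines the best-performing ones, rather than enumerating all paths as done here.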
86 - Zidu Liu , Li-Wei Yu , L.-M. Duan 2021
Tensor networks are efficient representations of high-dimensional tensors with widespread applications in quantum many-body physics. Recently, they have been adapted to the field of machine learning, giving rise to an emergent research frontier that has attracted considerable attention. Here, we study the trainability of tensor-network-based machine learning models by exploring the landscapes of different loss functions, with a focus on the matrix product states (also called tensor trains) architecture. In particular, we rigorously prove that barren plateaus (i.e., exponentially vanishing gradients) prevail in the training of machine learning algorithms with global loss functions. In contrast, for local loss functions the gradients with respect to variational parameters near the local observables do not vanish as the system size increases; barren plateaus are therefore absent and the corresponding models can be trained efficiently. Our results reveal a crucial aspect of tensor-network-based machine learning in a rigorous fashion, providing a valuable guide for both practical applications and theoretical studies.
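The architecture under study — a matrix product state (tensor train) used as a learning model — can be sketched in a few lines. The bond dimension, feature map, and random cores below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal MPS / tensor-train model: site i holds a core of shape
# (left_bond, physical_dim, right_bond); the model's output on a data
# vector is the full contraction of the train with per-site features.
n, d, chi = 6, 2, 4
cores = [rng.normal(size=(1 if i == 0 else chi, d,
                          1 if i == n - 1 else chi))
         for i in range(n)]

def feature(xi):
    """A simple local feature map, common in tensor-network ML."""
    return np.array([np.cos(xi), np.sin(xi)])

def mps_output(cores, x):
    """Contract the train left to right; cost is linear in n."""
    v = np.ones(1)
    for core, xi in zip(cores, x):
        mat = np.tensordot(feature(xi), core, axes=(0, 1))  # (left, right)
        v = v @ mat
    return v[0]

y = mps_output(cores, np.linspace(0.0, 1.0, n))
```

A global loss compares `y` to a target through the whole contraction, whereas a local loss involves only a few neighbouring cores; the result summarized above is that only the latter avoids barren plateaus as `n` grows.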
