
Mutual Information Scaling for Tensor Network Machine Learning

Added by Ian Convy
Publication date: 2021
Field: Physics
Language: English





Tensor networks have emerged as promising tools for machine learning, inspired by their widespread use as variational ansatzes in quantum many-body physics. It is well known that the success of a given tensor network ansatz depends in part on how well it can reproduce the underlying entanglement structure of the target state, with different network designs favoring different scaling patterns. We demonstrate here how a related correlation analysis can be applied to tensor network machine learning, and explore whether classical data possess correlation scaling patterns similar to those found in quantum states. We utilize mutual information as a natural analog to entanglement for classical data, and show that it can serve as a lower bound on the network entanglement needed for probabilistic classification. We then develop a logistic regression algorithm to estimate the mutual information between bipartitions of data features, and verify its accuracy on a set of Gaussian distributions designed to mimic different correlation patterns. Using this algorithm, we characterize the scaling patterns in the MNIST and Tiny Images datasets, and find clear evidence of boundary-law scaling in the latter.
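
A minimal sketch of this kind of classifier-based mutual information estimation is given below. It uses the standard density-ratio trick with scikit-learn's LogisticRegression on quadratic features; the bipartition handling, the train/test split, and the bivariate Gaussian check are illustrative assumptions, not the authors' exact algorithm.

```python
# Sketch: estimate I(A;B) by training a classifier to distinguish samples
# from the joint distribution p(x_A, x_B) from samples of the product of
# marginals p(x_A)p(x_B); the average log density ratio over joint samples
# is the KL divergence between the two, i.e. the mutual information.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def estimate_mi(X_a, X_b, test_frac=0.5, seed=0):
    """Estimate I(A;B) in nats via the density-ratio (classifier) trick."""
    rng = np.random.default_rng(seed)
    n = X_a.shape[0]
    # "Joint" samples keep rows aligned; "product" samples shuffle X_b so
    # the two feature blocks become (approximately) independent.
    joint = np.hstack([X_a, X_b])
    product = np.hstack([X_a, X_b[rng.permutation(n)]])
    X = np.vstack([joint, product])
    y = np.concatenate([np.ones(n), np.zeros(n)])

    # Hold out data so the MI estimate is evaluated out of sample.
    idx = rng.permutation(2 * n)
    cut = int((1 - test_frac) * 2 * n)
    train, test = idx[:cut], idx[cut:]

    # Quadratic features make the model class exact for Gaussian data,
    # whose log density ratio is a quadratic form in the features.
    clf = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                        LogisticRegression(max_iter=5000))
    clf.fit(X[train], y[train])

    # For balanced classes, the classifier logit estimates
    # log p_joint(x) / p_product(x); average it over joint samples only.
    p = clf.predict_proba(X[test])[:, 1]
    logit = np.log(p) - np.log(1.0 - p)
    return float(np.mean(logit[y[test] == 1]))

if __name__ == "__main__":
    # Sanity check on a bivariate Gaussian with known MI = -0.5 * ln(1 - rho^2).
    rho = 0.8
    cov = [[1.0, rho], [rho, 1.0]]
    data = np.random.default_rng(1).multivariate_normal([0.0, 0.0], cov, size=20000)
    est = estimate_mi(data[:, :1], data[:, 1:])
    print(f"estimated: {est:.3f}  exact: {-0.5 * np.log(1 - rho**2):.3f}")
```

For jointly Gaussian features the exact value -0.5 ln(1 - rho^2) provides a convenient sanity check, mirroring the paper's use of Gaussian distributions with known correlation structure.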

Related Research

Chu Guo, Kavan Modi (2020)
We show how to learn the structures of generic, non-Markovian, quantum stochastic processes using a tensor network based machine learning algorithm. We do this by representing the process as a matrix product operator (MPO) and training it with a database of local input states at different times and the corresponding time-nonlocal output state. In particular, we analyze a qubit coupled to an environment, predict the output state of the system at different times, and reconstruct the full system process. We show how the bond dimension of the MPO, a measure of non-Markovianity, depends on the properties of the system, of the environment, and of their interaction. This study therefore opens the way to a possible experimental investigation of the process tensor and its properties.
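
For readers unfamiliar with the MPO data structure referenced above, the following numpy sketch builds a random MPO and contracts it back into a dense operator. The core shapes, index ordering, and random initialization are illustrative assumptions and are unrelated to the training procedure described in the abstract.

```python
# Sketch: a matrix product operator (MPO) over n sites with physical
# dimension d and bond dimension chi. Contracting the cores reconstructs
# the full (d^n x d^n) operator, which is only feasible for small n; the
# bond dimension chi limits how much correlation the ansatz can carry.
import numpy as np

def random_mpo(n, d=2, chi=3, seed=0):
    rng = np.random.default_rng(seed)
    cores = []
    for i in range(n):
        left = 1 if i == 0 else chi
        right = 1 if i == n - 1 else chi
        # Core index order: (left bond, out physical, in physical, right bond)
        cores.append(rng.normal(size=(left, d, d, right)))
    return cores

def mpo_to_dense(cores):
    # Contract bond indices left to right, accumulating physical legs.
    op = cores[0]  # shape (1, d, d, chi)
    for core in cores[1:]:
        # (l, outs, ins, r) x (r, o, i, r') -> (l, outs, o, ins, i, r')
        op = np.einsum('aubr,rvcs->auvbcs', op, core)
        l, o1, o2, i1, i2, r = op.shape
        op = op.reshape(l, o1 * o2, i1 * i2, r)
    return op[0, :, :, 0]

cores = random_mpo(n=4)
dense = mpo_to_dense(cores)
print(dense.shape)  # (16, 16) for n=4, d=2
```

The bond dimension chi caps how much correlation (here, temporal memory of the process) the ansatz can represent, which is why it can serve as a measure of non-Markovianity.
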
We describe a quantum-assisted machine learning (QAML) method in which multivariate data is encoded into quantum states in a Hilbert space whose dimension is exponentially large in the length of the data vector. Learning in this space occurs through applying a low-depth quantum circuit with a tree tensor network (TTN) topology, which acts as an unsupervised feature extractor to identify the most relevant quantum states in a data-driven fashion. This unsupervised feature extractor then feeds a supervised linear classifier and encodes the output in a small-dimensional quantum register. In contrast to previous work on quantum-inspired TTN classifiers, in which the embedding map and class decision weights did not map the data to well-defined quantum states, we present an approach that can be implemented on gate-based quantum computing devices. In particular, we identify an embedding map with accuracy similar to exponential machines (Novikov et al., arXiv:1605.03795), but which produces valid quantum states from classical data vectors, and utilize manifold-based gradient optimization schemes to produce isometric operations mapping quantum states to a register of qubits defining a class decision. We detail methods for efficiently obtaining one- and two-point correlation functions of the decision boundary vectors of the quantum model, which can be used for model interpretability, as well as methods for obtaining classifications from partial data vectors. Further, we show that the use of isometric tensors can significantly aid in the human interpretability of the correlation functions extracted from the decision weights, and may produce models that are less susceptible to adversarial perturbations. We demonstrate our methodologies in applications utilizing the MNIST handwritten digit dataset and a multivariate time-series dataset of human activity recognition.
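
As an illustration of an embedding that "produces valid quantum states from classical data vectors", the sketch below uses the cosine/sine single-qubit feature map that is common in the tensor-network machine learning literature; it stands in for, and may differ from, the embedding map identified in the paper.

```python
# Sketch: map each classical feature x_i in [0, 1] to a normalized
# single-qubit state, so a length-N data vector becomes a valid product
# state in a 2^N-dimensional Hilbert space.
import numpy as np

def qubit_feature_map(x):
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    # Each feature -> (cos(pi x / 2), sin(pi x / 2)), a unit vector.
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

def product_state(x):
    # Kronecker product of the single-qubit states: a 2^N amplitude vector.
    state = np.array([1.0])
    for qubit in qubit_feature_map(x):
        state = np.kron(state, qubit)
    return state

psi = product_state([0.1, 0.7, 0.3])
print(psi.shape, np.linalg.norm(psi))  # (8,) 1.0 -- a normalized quantum state
```
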
We study the behavior of the mutual information (MI) in various quadratic fermionic chains, with and without pairing terms and both with short- and long-range hoppings. The models considered include the short-range Kitaev model and also cases in which the area law for the entanglement entropy is violated, either logarithmically or non-logarithmically. When the area law is violated at most logarithmically, the MI is a monotonically increasing function of the conformal four-point ratio x, also for the Kitaev model. When non-logarithmic violations of the area law are present, non-monotonic features of the MI can be observed, with a structure of peaks related to the spatial configuration of Bell pairs, and the four-point ratio x is found to be insufficient to capture the whole structure of the MI. For the model exhibiting a perfect volume law, the MI vanishes identically. For the Kitaev model, when it is gapped or the range of the pairing is large enough, the MI vanishes for small x. A comparison with results obtained from the AdS/CFT correspondence in the strong-coupling limit is also discussed.
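
For reference, the mutual information used above is the standard quantum mutual information, and the four-point ratio can be written, in one common convention for two intervals A = [x_1, x_2] and B = [x_3, x_4] with x_1 < x_2 < x_3 < x_4 (the paper's conventions may differ):

```latex
% Mutual information between subsystems A and B (S is the von Neumann entropy)
I(A:B) = S(\rho_A) + S(\rho_B) - S(\rho_{A \cup B}),
\qquad S(\rho) = -\operatorname{Tr}\rho \ln \rho .

% Conformal four-point (cross) ratio for A = [x_1, x_2], B = [x_3, x_4]
x = \frac{(x_2 - x_1)(x_4 - x_3)}{(x_3 - x_1)(x_4 - x_2)} .
```
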
Zidu Liu, Li-Wei Yu, L.-M. Duan (2021)
Tensor networks are efficient representations of high-dimensional tensors with widespread applications in quantum many-body physics. Recently, they have been adapted to the field of machine learning, giving rise to an emergent research frontier that has attracted considerable attention. Here, we study the trainability of tensor-network based machine learning models by exploring the landscapes of different loss functions, with a focus on the matrix product states (also called tensor trains) architecture. In particular, we rigorously prove that barren plateaus (i.e., exponentially vanishing gradients) prevail in the training process of machine learning algorithms with global loss functions. In contrast, for local loss functions the gradients with respect to variational parameters near the local observables do not vanish as the system size increases; the barren plateaus are therefore absent in this case, and the corresponding models can be efficiently trainable. Our results reveal a crucial aspect of tensor-network based machine learning in a rigorous fashion and provide a valuable guide for both practical applications and theoretical studies in the future.
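
Schematically, a barren plateau means that the variance of the loss gradient, taken over random parameter initializations, shrinks exponentially with the system size N; the bound below is written only to fix the meaning of "exponentially vanishing gradients", with model-dependent constants:

```latex
% Barren plateau (schematic): gradient variance over random initializations
% decays exponentially with the number of sites N
\operatorname{Var}_{\boldsymbol{\theta}}\!\left[
  \frac{\partial \mathcal{L}}{\partial \theta_k}
\right] \le C\, e^{-\alpha N}, \qquad C, \alpha > 0 .
```
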
The numerical simulation of quantum circuits is an indispensable tool for the development, verification, and validation of hybrid quantum-classical algorithms on near-term quantum co-processors. The emergence of exascale high-performance computing (HPC) platforms presents new opportunities for pushing the boundaries of quantum circuit simulation. We present a modernized version of the Tensor Network Quantum Virtual Machine (TNQVM), which serves as a quantum circuit simulation backend in the eXtreme-scale ACCelerator (XACC) framework. The new version is based on the general-purpose, scalable tensor network processing library ExaTN, and provides multiple configurable quantum circuit simulators enabling either exact quantum circuit simulation via full tensor network contraction, or approximate quantum state representations via suitable tensor factorizations. When needed, stochastic noise from real quantum processors is incorporated into the simulations by modeling quantum channels with Kraus tensors. By combining the portable XACC quantum programming frontend and the scalable ExaTN numerical backend, we introduce an end-to-end virtual quantum development environment which can scale from laptops to future exascale platforms. We report initial benchmarks of our framework, which include a demonstration of distributed execution, incorporation of quantum decoherence models, and simulation of the random quantum circuits used for the certification of quantum supremacy on the Google Sycamore superconducting architecture.
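
As a self-contained illustration of exact circuit simulation by full tensor contraction, the sketch below prepares a Bell pair with plain numpy; it does not use the XACC or ExaTN APIs described above, and the gate tensor conventions are assumptions for the example.

```python
# Sketch: simulate a small circuit exactly by contracting gate tensors
# against the state tensor (one axis per qubit). Gates are rank-2 or
# rank-4 tensors; here H on qubit 0 followed by CNOT 0->1 gives a Bell pair.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard, shape (2, 2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)   # (out0, out1, in0, in1)

def apply_1q(state, gate, qubit):
    # Contract the gate's input leg with the state's qubit leg, then restore order.
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    return np.moveaxis(state, 0, qubit)

def apply_2q(state, gate, q0, q1):
    state = np.tensordot(gate, state, axes=([2, 3], [q0, q1]))
    return np.moveaxis(state, [0, 1], [q0, q1])

n = 2
state = np.zeros((2,) * n, dtype=complex)
state[(0,) * n] = 1.0                                  # start in |00>
state = apply_1q(state, H, 0)
state = apply_2q(state, CNOT, 0, 1)
# Amplitudes of |00>, |01>, |10>, |11>: approximately [0.707, 0, 0, 0.707]
print(np.round(state.reshape(-1), 3))
```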