
The Presence and Absence of Barren Plateaus in Tensor-network Based Machine Learning

Posted by: Li-Wei Yu
Publication date: 2021
Research field: Physics
Paper language: English





Tensor networks are efficient representations of high-dimensional tensors with widespread applications in quantum many-body physics. Recently, they have been adapted to the field of machine learning, giving rise to an emergent research frontier that has attracted considerable attention. Here, we study the trainability of tensor-network based machine learning models by exploring the landscapes of different loss functions, with a focus on the matrix product states (also called tensor trains) architecture. In particular, we rigorously prove that barren plateaus (i.e., exponentially vanishing gradients) prevail in the training process of machine learning algorithms with global loss functions. In contrast, for local loss functions the gradients with respect to variational parameters near the local observables do not vanish as the system size increases. Barren plateaus are therefore absent in this case, and the corresponding models can be efficiently trainable. Our results rigorously reveal a crucial aspect of tensor-network based machine learning and provide a valuable guide for both practical applications and theoretical studies in the future.
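As a concrete illustration of the global-loss case, the toy script below (a minimal sketch of our own, not the paper's construction; the model, the loss L = f(x)^2, and the crude 1/sqrt(d*chi) normalization are all illustrative assumptions) estimates the variance, over random initializations, of a single finite-difference gradient of a random matrix-product-state model. The variance shrinks rapidly as the number of sites grows, which is the barren-plateau signature the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mps(n, d=2, chi=4):
    """Random MPS tensors A[i] of shape (chi_left, d, chi_right)."""
    mps = []
    for i in range(n):
        cl = 1 if i == 0 else chi
        cr = 1 if i == n - 1 else chi
        mps.append(rng.normal(size=(cl, d, cr)) / np.sqrt(d * chi))
    return mps

def model_output(mps, x):
    """Contract the MPS against a product-state input (one vector per site)."""
    env = np.ones((1, 1))
    for A, v in zip(mps, x):
        env = env @ np.einsum('ldr,d->lr', A, v)
    return env[0, 0]

def grad_variance(n, samples=500, eps=1e-6):
    """Variance, over random initializations, of a finite-difference
    gradient of the 'global' loss L = f(x)^2 w.r.t. one parameter at site 0."""
    x = [np.array([1.0, 0.0])] * n   # fixed product-state input
    grads = []
    for _ in range(samples):
        mps = random_mps(n)
        base = model_output(mps, x) ** 2
        mps[0][0, 0, 0] += eps       # perturb a single variational parameter
        grads.append((model_output(mps, x) ** 2 - base) / eps)
    return np.var(grads)

for n in (4, 8, 12, 16):
    print(n, grad_variance(n))       # decays roughly exponentially in n
```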




Read also

Quantum neural networks (QNNs) have generated excitement around the possibility of efficiently analyzing quantum data. But this excitement has been tempered by the existence of exponentially vanishing gradients, known as barren plateau landscapes, for many QNN architectures. Recently, Quantum Convolutional Neural Networks (QCNNs) have been proposed, involving a sequence of convolutional and pooling layers that reduce the number of qubits while preserving information about relevant data features. In this work we rigorously analyze the gradient scaling for the parameters in the QCNN architecture. We find that the variance of the gradient vanishes no faster than polynomially, implying that QCNNs do not exhibit barren plateaus. This provides an analytical guarantee for the trainability of randomly initialized QCNNs, which singles out QCNNs as being trainable, unlike many other QNN architectures. To derive our results we introduce a novel graph-based method to analyze expectation values over Haar-distributed unitaries, which will likely be useful in other contexts. Finally, we perform numerical simulations to verify our analytical results.
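In symbols, the claimed contrast with barren-plateau architectures can be paraphrased as follows (our summary of the scaling statements in these abstracts, with C the cost, theta a trainable parameter, and n the number of qubits):

```latex
\operatorname{Var}_{\theta}\!\left[\partial_{\theta} C\right] \in O\!\left(b^{-n}\right),\ b>1 \quad \text{(barren plateau)},
\qquad
\operatorname{Var}_{\theta}\!\left[\partial_{\theta} C\right] \in \Omega\!\left(1/\operatorname{poly}(n)\right) \quad \text{(QCNN)}.
```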
We argue that an excess in entanglement between the visible and hidden units in a Quantum Neural Network can hinder learning. In particular, we show that quantum neural networks that satisfy a volume law in the entanglement entropy will, with high probability, give rise to models not suitable for learning. Using arguments from quantum thermodynamics, we then show that this volume law is typical and that there exists a barren plateau in the optimization landscape due to entanglement. More precisely, we show that for any bounded objective function on the visible layers, the Lipschitz constants of the expectation value of that objective function will scale inversely with the dimension of the hidden subsystem with high probability. We show how this can cause both gradient-descent and gradient-free methods to fail. We note that similar problems can occur with quantum Boltzmann machines, although stronger assumptions on the coupling between the hidden and visible subspaces are necessary. We highlight how pretraining such generative models may provide a way to navigate these barren plateaus.
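The volume-law typicality claim can be checked numerically in a few lines: for a Haar-like random pure state on a visible/hidden bipartition, the entanglement entropy sits near its maximal value, i.e., it grows linearly with subsystem size. The sketch below is our own toy check; the Gaussian random-state model is an assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def entanglement_entropy(n_vis, n_hid):
    """Von Neumann entropy of the visible/hidden cut of a random pure state."""
    dv, dh = 2 ** n_vis, 2 ** n_hid
    psi = rng.normal(size=(dv, dh)) + 1j * rng.normal(size=(dv, dh))
    psi /= np.linalg.norm(psi)
    s = np.linalg.svd(psi, compute_uv=False)   # Schmidt coefficients
    p = (s ** 2)[s ** 2 > 1e-15]
    return float(-np.sum(p * np.log(p)))

for n in (2, 4, 6, 8):
    # entropy tracks the maximum n*ln(2), i.e., a volume law
    print(n, entanglement_entropy(n, n), "max:", n * np.log(2))
```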
Chu Guo, Kavan Modi (2020)
We show how to learn the structure of generic, non-Markovian, quantum stochastic processes using a tensor-network based machine learning algorithm. We do this by representing the process as a matrix product operator (MPO) and training it with a database of local input states at different times together with the corresponding time-nonlocal output state. In particular, we analyze a qubit coupled to an environment, predict the output state of the system at different times, and reconstruct the full system process. We show how the bond dimension of the MPO, a measure of non-Markovianity, depends on the properties of the system, of the environment, and of their interaction. This study thus opens the way to a possible experimental investigation into the process tensor and its properties.
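A minimal sketch of the MPO side of this construction (our own illustration, not the authors' code: `operator_to_mpo` and its interface are hypothetical) decomposes a many-site operator into MPO tensors by sequential SVDs and reads off the bond dimensions, the quantity the abstract uses as a non-Markovianity measure; a product (memoryless) operator comes out with bond dimension 1.

```python
import numpy as np

def operator_to_mpo(op, n, d=2, tol=1e-10):
    """Decompose a d^n x d^n matrix into MPO tensors of shape
    (chi_left, d_out, d_in, chi_right) by sequential SVDs.
    Returns the tensors and the list of bond dimensions."""
    # reorder indices to (out_1, in_1, out_2, in_2, ...)
    t = op.reshape([d] * (2 * n))
    perm = [i for pair in zip(range(n), range(n, 2 * n)) for i in pair]
    t = t.transpose(perm).reshape(1, -1)
    tensors, bonds = [], []
    for _ in range(n - 1):
        chi = t.shape[0]
        t = t.reshape(chi * d * d, -1)
        u, s, vh = np.linalg.svd(t, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))   # truncate tiny values
        tensors.append(u[:, :keep].reshape(chi, d, d, keep))
        bonds.append(keep)
        t = s[:keep, None] * vh[:keep]
    tensors.append(t.reshape(-1, d, d, 1))
    return tensors, bonds

# A product operator has no cross-site correlations, so every bond is 1.
I, Z = np.eye(2), np.diag([1.0, -1.0])
op = np.kron(np.kron(I, Z), I)
_, bonds = operator_to_mpo(op, n=3)
print(bonds)   # -> [1, 1]
```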
Barren plateau landscapes correspond to gradients that vanish exponentially in the number of qubits. Such landscapes have been demonstrated for variational quantum algorithms and quantum neural networks with either deep circuits or global cost functions. For obvious reasons, it is expected that gradient-based optimizers will be significantly affected by barren plateaus. However, whether or not gradient-free optimizers are impacted is a topic of debate, with some arguing that gradient-free approaches are unaffected by barren plateaus. Here we show that, indeed, gradient-free optimizers do not solve the barren plateau problem. Our main result proves that cost function differences, which are the basis for making decisions in gradient-free optimization, are exponentially suppressed in a barren plateau. Hence, without exponential precision, gradient-free optimizers will not make progress in the optimization. We numerically confirm this by training in a barren plateau with several gradient-free optimizers (the Nelder-Mead, Powell, and COBYLA algorithms), and show that the number of shots required in the optimization grows exponentially with the number of qubits.
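The shot-count argument can be made concrete with a back-of-the-envelope estimate (ours, not the paper's numerics): if the true cost difference between two candidate points decays like 2^-n, the number of shots needed before sampling noise stops masking that difference grows like 4^n.

```python
def shots_to_resolve(gap, var_per_shot=0.25, sigmas=5):
    """Shots N so that shot noise sqrt(var_per_shot / N) sits a factor
    `sigmas` below the cost difference `gap` we must resolve."""
    return int(sigmas ** 2 * var_per_shot / gap ** 2)

for n in range(2, 12, 2):
    gap = 2.0 ** -n                 # cost difference deep in a barren plateau
    print(n, shots_to_resolve(gap)) # grows like 4^n
```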
Machine learning (ML) techniques applied to quantum many-body physics have emerged as a new research field. While the numerical power of this approach is undeniable, the most expressive ML algorithms, such as neural networks, are black boxes: the user knows neither the logic behind the model predictions nor their uncertainty. In this work, we present a toolbox for interpretability and reliability that is agnostic of the model architecture. In particular, it provides a notion of the influence of the input data on the prediction at a given test point, an estimate of the uncertainty of the model predictions, and an extrapolation score for the model predictions. Such a toolbox requires only a single computation of the Hessian of the training loss function. Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
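As a sketch of the kind of computation involved (our own toy, not the toolbox itself: the ridge-regression model and all names below are illustrative), the classic influence-function recipe needs exactly one Hessian of the training loss; the influence of training point i on a test prediction is -grad_test^T H^{-1} grad_i.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy training data for a linear least-squares (ridge) model.
X = rng.normal(size=(50, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=50)

lam = 1e-2
H = X.T @ X + lam * np.eye(5)          # the single Hessian of the training loss
w = np.linalg.solve(H, X.T @ y)        # trained parameters

x_test = rng.normal(size=5)
grad_test = x_test                     # d(prediction)/dw for a linear model

residuals = X @ w - y
grads = residuals[:, None] * X         # per-sample loss gradients
influence = -grads @ np.linalg.solve(H, grad_test)
print(influence[:5])                   # influence of each training point
```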