We present a classical algorithm to find approximate solutions to instances of quadratic unconstrained binary optimisation. The algorithm can be seen as an analogue of quantum annealing under the restriction of a product state space, where the dynamical evolution in quantum annealing is replaced with a gradient-descent-based method. This formulation quickly finds high-quality solutions to large-scale problem instances and can naturally be accelerated by dedicated hardware such as graphics processing units. We benchmark our approach on large-scale problem instances with tuneable hardness and planted solutions. We find that our algorithm offers performance similar to current state-of-the-art approaches within a comparably simple, gradient-based, non-stochastic setting.
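The relaxation idea above can be illustrated classically. The following is a minimal sketch, not the authors' exact parameterisation: each binary variable is replaced by a sigmoid-activated soft bit, plain gradient descent is run on the relaxed energy, and the result is rounded back to a binary string.

```python
import numpy as np

def qubo_gradient_descent(Q, steps=500, lr=0.1, seed=0):
    """Approximately minimise E(x) = x^T Q x over x in {0,1}^n by
    relaxing x to sigmoid(z) and running plain gradient descent on z.
    Illustrative sketch only; the paper's parameterisation may differ."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    z = rng.normal(scale=0.1, size=n)         # small random init breaks symmetry
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-z))          # soft bits in (0, 1)
        grad_x = (Q + Q.T) @ x                # dE/dx for E = x^T Q x
        z -= lr * grad_x * x * (1.0 - x)      # chain rule through the sigmoid
    x_bin = (1.0 / (1.0 + np.exp(-z)) > 0.5).astype(int)
    return x_bin, x_bin @ Q @ x_bin

# Toy instance: E(x) = -x1 - x2 + 2*x1*x2, minimised by (1,0) or (0,1).
x, energy = qubo_gradient_descent(np.array([[-1.0, 2.0], [0.0, -1.0]]))
```

Because every operation is a dense matrix-vector product, many restarts or problem instances can be batched and run on a GPU, which is the acceleration route the abstract alludes to.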
Machine learning (ML) techniques applied to quantum many-body physics have emerged as a new research field. While the numerical power of this approach is undeniable, the most expressive ML algorithms, such as neural networks, are black boxes: the user knows neither the logic behind the model predictions nor their uncertainty. In this work, we present a toolbox for interpretability and reliability, agnostic of the model architecture. In particular, it provides a notion of the influence of the input data on the prediction at a given test point, an estimate of the uncertainty of the model predictions, and an extrapolation score for the model predictions. The toolbox requires only a single computation of the Hessian of the training loss function. Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
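To make the Hessian-based idea concrete, here is a hedged sketch, not the paper's toolbox, using a linear model with squared loss so the Hessian is exact. One Hessian yields both an influence-function-style score for a training point and a Laplace-style predictive variance:

```python
import numpy as np

def fit(X, y, reg=1e-3):
    """Ridge regression; H is the Hessian of 0.5*||Xw - y||^2 + 0.5*reg*||w||^2."""
    H = X.T @ X + reg * np.eye(X.shape[1])
    w = np.linalg.solve(H, X.T @ y)
    return w, H

def influence(H, x_train, r_train, x_test):
    """Effect of upweighting one training point (residual r_train)
    on the prediction at x_test, in the style of influence functions."""
    g_train = r_train * x_train               # gradient of that point's loss
    return -x_test @ np.linalg.solve(H, g_train)

def predictive_variance(H, x_test):
    """Laplace-style variance: grows for test points far from the training data,
    serving as a rough extrapolation score."""
    return x_test @ np.linalg.solve(H, x_test)

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])                 # exactly y = 2x
w, H = fit(X, y)
```

For nonlinear models the same quantities are built from the Hessian of the training loss at the learned parameters; the linear case above just makes the single-Hessian structure explicit.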
Near-term quantum devices can be used to build quantum machine learning models, such as quantum kernel methods and quantum neural networks (QNNs), to perform classification tasks. There have been many proposals on how to use variational quantum circuits as quantum perceptrons or as QNNs. The aim of this work is to systematically compare different QNN architectures and to evaluate their relative expressive power with a teacher-student scheme. Specifically, the teacher model generates datasets mapping random inputs to outputs, which then have to be learned by the student models. This way, we avoid training on arbitrary data sets and can compare the learning capacity of different models directly via the loss, the prediction map, the accuracy, and the relative entropy between the prediction maps. We focus particularly on a quantum perceptron model inspired by the recent work of Tacchino et al. [Tacchino1] and compare it to the data re-uploading scheme originally introduced by Perez-Salinas et al. [data_re-uploading]. We discuss alterations of the perceptron model and the formation of deep QNNs to better understand the role of hidden units and non-linearities in these architectures.
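The teacher-student logic is independent of the quantum setting, so it can be sketched classically. In this illustration (all models are classical stand-ins, not the paper's QNNs) a fixed random teacher with bounded outputs labels random inputs, and two students of different expressivity are compared directly via their training loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# Teacher: a fixed random map with outputs in [-1, 1], a hedged classical
# stand-in for a teacher QNN producing expectation values.
w_teacher = rng.normal(size=2)
def teacher(X):
    return np.cos(X @ w_teacher)

X = rng.uniform(-np.pi, np.pi, size=(300, 2))
y = teacher(X)                                # the dataset the students must learn

def student_loss(features):
    """Fit a linear readout on the given feature map; return the training MSE,
    which serves as the direct model-comparison score."""
    F = features(X)
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return float(np.mean((F @ w - y) ** 2))

# Student A: plain linear features.  Student B: random Fourier features,
# a more expressive model (loosely analogous to data re-uploading).
W = rng.normal(size=(2, 100))
b = rng.uniform(0, 2 * np.pi, size=100)
loss_linear = student_loss(lambda X: np.hstack([X, np.ones((len(X), 1))]))
loss_fourier = student_loss(lambda X: np.cos(X @ W + b))
```

The point of the scheme is exactly this kind of controlled comparison: both students see identical teacher-generated data, so the loss gap reflects expressivity rather than dataset choice.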
Variational Quantum Algorithms have emerged as a leading paradigm for near-term quantum computation. In such algorithms, a parameterized quantum circuit is controlled via a classical optimization method that seeks to minimize a problem-dependent cost function. Although such algorithms are powerful in principle, the non-convexity of the associated cost landscapes and the prevalence of local minima mean that local optimization methods such as gradient descent typically fail to reach good solutions. In this work we suggest a method to improve gradient-based approaches to variational quantum circuit optimization, which involves coupling the output of the quantum circuit to a classical neural network. The effect of this neural network is to perturb the cost landscape as a function of its parameters, so that local minima can be escaped or avoided via a modification to the cost landscape itself. We present two algorithms within this framework and numerically benchmark them on small instances of the Max-Cut optimization problem. We show that the method is able to reach deeper minima and lower cost values than standard gradient descent based approaches. Moreover, our algorithms require essentially the same number of quantum circuit evaluations per optimization step as the standard approach since, unlike the gradient with respect to the circuit, the neural network updates can be estimated in parallel via the backpropagation method. More generally, our approach suggests that relaxing the cost landscape is a fruitful path to improving near-term quantum computing algorithms.
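The coupling idea can be caricatured in one dimension. The sketch below is purely illustrative and not one of the paper's two algorithms: a non-convex stand-in cost plays the role of the circuit landscape, and the parameters of a one-neuron "network" that additively perturbs it are descended jointly with the circuit parameter, with weight decay keeping the perturbation bounded.

```python
import numpy as np

def true_cost(theta):
    """Non-convex 1-D stand-in for a variational circuit's cost landscape."""
    return np.sin(3 * theta) + 0.1 * theta ** 2

# Augmented cost: true_cost(theta) + v*tanh(w*theta + b) + lam*v**2.
# theta is the "circuit" parameter; (w, b, v) are the network's parameters.
theta, w, b, v = 1.2, 0.5, 0.0, 0.5           # arbitrary, illustrative start
lam, lr = 1.0, 0.01                           # weight decay bounds the perturbation
start = true_cost(theta)

for _ in range(2000):
    t = np.tanh(w * theta + b)
    # Analytic gradients of the augmented cost (backprop by hand):
    g_theta = 3 * np.cos(3 * theta) + 0.2 * theta + v * w * (1 - t ** 2)
    g_w = v * theta * (1 - t ** 2)
    g_b = v * (1 - t ** 2)
    g_v = t + 2 * lam * v
    theta -= lr * g_theta                     # joint descent on all parameters
    w -= lr * g_w
    b -= lr * g_b
    v -= lr * g_v

end = true_cost(theta)
```

In the actual setting only `g_theta` would require quantum circuit evaluations; the network gradients are classical backpropagation, which is why the per-step circuit cost matches standard gradient descent.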
We demonstrate how to explore phase diagrams with automated and unsupervised machine learning to find regions of interest for possible new phases. In contrast to supervised learning, where data is classified using predetermined labels, we here perform anomaly detection, where the task is to differentiate a normal data set, composed of one or several classes, from anomalous data. As a paradigmatic example, we explore the phase diagram of the extended Bose-Hubbard model in one dimension at exact integer filling and employ deep neural networks to determine the entire phase diagram in a completely unsupervised and automated fashion. As input data for learning, we first use the entanglement spectra and central tensors derived from tensor-network algorithms for ground-state computation, and later we extend our method to experimentally accessible data such as low-order correlation functions as inputs. Our method allows us to reveal a phase-separated region between supersolid and superfluid parts with unexpected properties, which appears in the system in addition to the standard superfluid, Mott insulator, Haldane-insulating, and density wave phases.
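The anomaly-detection step can be sketched with a linear autoencoder (one-component PCA) in place of the deep networks used in the work; the data here is synthetic and illustrative only. The model is trained on "normal" data from one region of a parameter space, and a large reconstruction error then flags points that do not fit the learned structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" phase data: samples lying near a 1-D manifold (the line y = 2x).
t = rng.uniform(-1, 1, size=(200, 1))
X_normal = np.hstack([t, 2 * t]) + 0.05 * rng.normal(size=(200, 2))

# Linear autoencoder = projection onto the top principal component.
mean = X_normal.mean(axis=0)
_, _, Vt = np.linalg.svd(X_normal - mean, full_matrices=False)
P = Vt[:1].T @ Vt[:1]                         # rank-1 reconstruction operator

def score(x):
    """Reconstruction error; large values flag anomalous (possible new-phase) data."""
    xc = x - mean
    return float(np.sum((xc - xc @ P) ** 2))

in_phase = score(np.array([0.5, 1.0]))        # consistent with the training region
off_phase = score(np.array([1.0, -2.0]))      # off the learned manifold
```

Scanning such a score across the phase diagram, with the autoencoder retrained on different reference regions, is the basic loop behind the unsupervised mapping described above.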