
Avoiding local minima in Variational Quantum Algorithms with Neural Networks

Published by: Javier Rivera-Dean
Publication date: 2021
Research field: Physics
Paper language: English





Variational Quantum Algorithms have emerged as a leading paradigm for near-term quantum computation. In such algorithms, a parameterized quantum circuit is controlled via a classical optimization method that seeks to minimize a problem-dependent cost function. Although such algorithms are powerful in principle, the non-convexity of the associated cost landscapes and the prevalence of local minima mean that local optimization methods such as gradient descent typically fail to reach good solutions. In this work we suggest a method to improve gradient-based approaches to variational quantum circuit optimization, which involves coupling the output of the quantum circuit to a classical neural network. The effect of this neural network is to perturb the cost landscape as a function of its parameters, so that local minima can be escaped or avoided via a modification of the cost landscape itself. We present two algorithms within this framework and numerically benchmark them on small instances of the Max-Cut optimization problem. We show that the method is able to reach deeper minima and lower cost values than standard gradient-descent-based approaches. Moreover, our algorithms require essentially the same number of quantum circuit evaluations per optimization step as the standard approach since, unlike the gradient with respect to the circuit, the neural network updates can be estimated in parallel via the backpropagation method. More generally, our approach suggests that relaxing the cost landscape is a fruitful path to improving near-term quantum computing algorithms.
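The idea of coupling the circuit output to a classical network lends itself to a simple illustration. The following is a minimal numpy sketch, not the paper's actual construction: the toy "circuit", the problem cost, and the one-hidden-layer network are placeholder assumptions. A parameterized map produces expectation values, a small classical network adds a parameter-dependent perturbation to the cost, and both sets of parameters are updated jointly; the network gradients reuse the same circuit outputs, so they add no extra circuit evaluations.

```python
# Toy sketch (NOT the paper's implementation): a classical neural network is
# coupled to the measured outputs of a parameterized "circuit", and its
# parameters are trained jointly with the circuit parameters so that the
# effective cost landscape is perturbed during optimization.
import numpy as np

rng = np.random.default_rng(0)

def circuit_expectations(theta):
    """Stand-in for a parameterized quantum circuit: <Z> of independent
    single-qubit rotations, one expectation value per parameter."""
    return np.cos(theta)

def problem_cost(z):
    """Stand-in problem Hamiltonian: a simple non-convex function of <Z>."""
    return np.sum(z**3 - z)

def nn_perturbation(z, W1, W2):
    """Tiny one-hidden-layer network acting on the measured expectations."""
    h = np.tanh(W1 @ z)
    return float(W2 @ h)

def total_cost(theta, W1, W2):
    z = circuit_expectations(theta)
    return problem_cost(z) + nn_perturbation(z, W1, W2)

# Initialize circuit parameters and network weights.
n_params, n_hidden = 4, 8
theta = rng.uniform(0, 2 * np.pi, n_params)
W1 = 0.1 * rng.standard_normal((n_hidden, n_params))
W2 = 0.1 * rng.standard_normal(n_hidden)

eta, eps = 0.05, 1e-4
for step in range(200):
    # Gradient w.r.t. circuit parameters via finite differences
    # (a real device would use e.g. the parameter-shift rule).
    grad_theta = np.array([
        (total_cost(theta + eps * e, W1, W2) - total_cost(theta - eps * e, W1, W2)) / (2 * eps)
        for e in np.eye(n_params)
    ])
    # Gradients w.r.t. network weights reuse the same circuit outputs,
    # so they add no extra circuit evaluations (here: analytic backprop).
    z = circuit_expectations(theta)
    h = np.tanh(W1 @ z)
    grad_W2 = h
    grad_W1 = np.outer(W2 * (1 - h**2), z)
    # Joint gradient-descent update on the perturbed landscape.
    theta -= eta * grad_theta
    W1 -= eta * grad_W1
    W2 -= eta * grad_W2

print("final problem cost:", problem_cost(circuit_expectations(theta)))
```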


Read also

Applications such as simulating large quantum systems or solving large-scale linear algebra problems are immensely challenging for classical computers due to their extremely high computational cost. Quantum computers promise to unlock these applications, although fault-tolerant quantum computers will likely not be available for several years. Currently available quantum devices have serious constraints, including limited qubit numbers and noise processes that limit circuit depth. Variational Quantum Algorithms (VQAs), which employ a classical optimizer to train a parametrized quantum circuit, have emerged as a leading strategy to address these constraints. VQAs have now been proposed for essentially all applications that researchers have envisioned for quantum computers, and they appear to be the best hope for obtaining quantum advantage. Nevertheless, challenges remain, including the trainability, accuracy, and efficiency of VQAs. In this review article we present an overview of the field of VQAs. Furthermore, we discuss strategies to overcome their challenges as well as the exciting prospects for using them as a means to obtain quantum advantage.
Variational quantum algorithms (VQAs) have the potential of utilizing near-term quantum machines to gain certain computational advantages over classical methods. Nevertheless, modern VQAs suffer from cumbersome computational overhead, hampered by the tradition of employing a solitary quantum processor to handle large-volume data. As such, to better exert the superiority of VQAs, it is of great significance to improve their runtime efficiency. Here we devise an efficient distributed optimization scheme, called QUDIO, to address this issue. Specifically, in QUDIO, a classical central server partitions the learning problem into multiple subproblems and allocates them to multiple local nodes, each of which consists of a quantum processor and a classical optimizer. During the training procedure, all local nodes carry out optimization in parallel while the classical server synchronizes optimization information among the local nodes in a timely manner. In doing so, we prove a sublinear convergence rate of QUDIO in terms of the number of global iterations under the ideal scenario, while system imperfections may incur divergent optimization. Numerical results on standard benchmarks demonstrate that QUDIO can surprisingly achieve a superlinear runtime speedup with respect to the number of local nodes. Our proposal can readily be combined with other advanced VQA-based techniques to narrow the gap between the state of the art and applications with quantum advantage. A rough sketch of such a distributed loop is given below.
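The sketch below is only a conceptual illustration of a server/local-node loop of this kind; the data partitioning, the local cost, and the synchronization-by-averaging rule are placeholder assumptions, not QUDIO's actual protocol, and the local "circuits" are simulated classically.

```python
# Conceptual sketch of a distributed server/local-node optimization loop.
import numpy as np

rng = np.random.default_rng(1)

def local_cost(theta, data_chunk):
    """Stand-in for a local node's quantum expectation value on its data shard."""
    return np.mean((np.cos(theta[None, :]) - data_chunk) ** 2)

def local_grad(theta, data_chunk, eps=1e-4):
    # Finite-difference gradient as a placeholder for parameter-shift estimates.
    return np.array([
        (local_cost(theta + eps * e, data_chunk) - local_cost(theta - eps * e, data_chunk)) / (2 * eps)
        for e in np.eye(theta.size)
    ])

n_nodes, n_params, eta = 4, 3, 0.2
data = rng.uniform(-1, 1, (40, n_params))
shards = np.array_split(data, n_nodes)      # server partitions the problem
theta = rng.uniform(0, np.pi, n_params)     # shared circuit parameters

for sync_round in range(50):
    # Each local node (quantum processor + classical optimizer) updates in parallel.
    local_thetas = [theta - eta * local_grad(theta, shard) for shard in shards]
    # Server synchronizes, here by averaging the locally updated parameters.
    theta = np.mean(local_thetas, axis=0)

print("global cost:", np.mean([local_cost(theta, s) for s in shards]))
```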
We show that nonlinear problems including nonlinear partial differential equations can be efficiently solved by variational quantum computing. We achieve this by utilizing multiple copies of variational quantum states to treat nonlinearities efficiently and by introducing tensor networks as a programming paradigm. The key concepts of the algorithm are demonstrated for the nonlinear Schrödinger equation as a canonical example. We numerically show that the variational quantum ansatz can be exponentially more efficient than matrix product states and present experimental proof-of-principle results obtained on an IBM Q device.
Eric R. Anschuetz, 2021
One of the most important properties of classical neural networks is the clustering of local minima of the network near the global minimum, enabling efficient training. This has been observed not only numerically, but also has begun to be analytically understood through the lens of random matrix theory. Inspired by these results in classical machine learning, we show that a certain randomized class of variational quantum algorithms can be mapped to Wishart random fields on the hypertorus. Then, using the statistical properties of such random processes, we analytically find the expected distribution of critical points. Unlike the case for deep neural networks, we show the existence of a transition in the quality of local minima at a number of parameters exponentially large in the problem size. Below this transition, all local minima are concentrated far from the global minimum; above, all local minima are concentrated near the global minimum. This is consistent with previously observed numerical results on the landscape behavior of Hamiltonian-agnostic variational quantum algorithms. We give a heuristic explanation as to why ansatzes that depend on the problem Hamiltonian might not suffer from these scaling issues. We also verify that our analytic results hold experimentally even at modest system sizes.
Variational quantum algorithms (VQAs) are promising methods that leverage noisy quantum computers and classical computing techniques for practical applications. In VQAs, classical optimizers such as gradient-based optimizers are utilized to adjust the parameters of the quantum circuit so that the objective function is minimized. However, they often suffer from the so-called vanishing gradient or barren plateau issue. On the other hand, the normalized gradient descent (NGD) method, which employs the normalized gradient vector to update the parameters, has been successfully utilized in several optimization problems. Here, we study the performance of NGD methods in the optimization of VQAs for the first time. Our goal is two-fold. The first is to examine the effectiveness of NGD and its variants for overcoming the vanishing gradient problem. The second is to propose a new NGD that can attain faster convergence than the ordinary NGD. We performed numerical simulations of these gradient-based optimizers in the context of quantum chemistry, where VQAs are used to find the ground state of a given Hamiltonian. The results show the effective convergence property of the NGD methods in VQAs, compared to the relevant optimizers without normalization. Moreover, we make use of normalized gradient vectors from past iteration steps to propose a novel historical NGD that has a theoretical guarantee of accelerated convergence, which is also observed in the numerical experiments.
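For reference, the basic NGD update is straightforward to sketch. The toy cost and the finite-difference gradient below are placeholder assumptions (a real VQA would estimate gradients on hardware, e.g. via the parameter-shift rule); only the normalization of the update step illustrates the technique named in the abstract.

```python
# Minimal sketch of a normalized-gradient-descent (NGD) update for a
# variational cost C(theta); the cost below is a toy stand-in, not a VQA.
import numpy as np

def cost(theta):
    return np.sum(1 - np.cos(theta))          # toy "energy" landscape

def grad(theta, eps=1e-4):
    # Placeholder gradient estimate (a VQA would use the parameter-shift rule).
    return np.array([
        (cost(theta + eps * e) - cost(theta - eps * e)) / (2 * eps)
        for e in np.eye(theta.size)
    ])

theta = np.array([3.0, -2.5, 1.2])
eta = 0.1
for _ in range(100):
    g = grad(theta)
    # Ordinary GD would step -eta * g; NGD rescales by the gradient norm, so the
    # step length stays O(eta) even when the gradient is exponentially small
    # (the "barren plateau" regime described above).
    theta -= eta * g / (np.linalg.norm(g) + 1e-12)

print("final cost:", cost(theta))
```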