
Unsupervised Neural Networks for Quantum Eigenvalue Problems

Published by Marios Mattheakis
Publication date: 2020
Paper language: English


Eigenvalue problems are critical to several fields of science and engineering. We present a novel unsupervised neural network for discovering eigenfunctions and eigenvalues of differential eigenvalue problems, with solutions that identically satisfy the boundary conditions. An embedded scanning mechanism allows the method to find an arbitrary number of solutions. The network optimization is data-free and depends solely on the predictions. The unsupervised method is used to solve the quantum infinite well and quantum oscillator eigenvalue problems.
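
The abstract names the ingredients but not the implementation, so the following is only a minimal PyTorch sketch of the idea for the infinite square well: a hard-coded boundary parametrization psi(x) = x(L - x) * N(x), a trainable eigenvalue E, and a data-free loss built from the Schrodinger residual plus a normalization penalty. The scanning mechanism for extracting further states is omitted, and all names here are illustrative rather than the authors' code.

```python
import torch

# Minimal sketch (not the authors' code): solve -psi''(x) = E psi(x)
# on [0, L] with psi(0) = psi(L) = 0 (infinite well, units hbar = 2m = 1).
torch.manual_seed(0)
L = 1.0

net = torch.nn.Sequential(
    torch.nn.Linear(1, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)
E = torch.nn.Parameter(torch.tensor(5.0))   # trainable eigenvalue
opt = torch.optim.Adam(list(net.parameters()) + [E], lr=1e-3)

for step in range(20000):
    x = (torch.rand(256, 1) * L).requires_grad_(True)
    # Parametrization makes psi vanish identically at x = 0 and x = L.
    psi = x * (L - x) * net(x)
    dpsi = torch.autograd.grad(psi.sum(), x, create_graph=True)[0]
    d2psi = torch.autograd.grad(dpsi.sum(), x, create_graph=True)[0]
    residual = (-d2psi - E * psi).pow(2).mean()     # Schrodinger residual
    norm = (psi.pow(2).mean() * L - 1.0).pow(2)     # Monte Carlo estimate of the norm
    loss = residual + norm
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"learned E = {E.item():.4f}  (exact ground state: pi^2 = 9.8696)")
```

With this setup the eigenvalue should settle near the eigenvalue closest to its initialization; the paper's scanning mechanism is what turns this into a systematic search over many states.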


Read also

Polynomial eigenvalue problems (PEPs) arise in a variety of science and engineering applications, and many breakthroughs in the development of classical algorithms to solve PEPs have been made in the past decades. Here we attempt to solve PEPs on a quantum computer. Firstly, for generalized eigenvalue problems (GEPs) $Ax = \lambda Bx$ with $A, B$ symmetric and $B$ positive definite, we give a quantum algorithm based on block-encoding and quantum phase estimation. In the more general case when $B$ is invertible, $B^{-1}A$ is diagonalizable, and all the eigenvalues are real, we propose a quantum algorithm based on the Fourier spectral method for solving ordinary differential equations (ODEs). The inputs of our algorithms can be any desired states, and the outputs are superpositions of the eigenpairs. The complexities are polylogarithmic in the matrix size and linear in the precision. The dependence on precision is optimal. Secondly, we show that when $B$ is singular, any quantum algorithm needs at least $\Omega(\sqrt{n})$ queries to compute the eigenvalues, where $n$ is the matrix size. Thirdly, based on the linearization method and the connection between PEPs and higher-order ODEs, we provide two quantum algorithms that solve PEPs by extending the quantum algorithm for GEPs. We also give a detailed complexity analysis of the algorithm for two special types of quadratic eigenvalue problems that are important in practice. Finally, under an extra assumption, we propose a quantum algorithm to solve PEPs when the eigenvalues are complex.
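
The linearization step mentioned above is purely classical and easy to illustrate. The NumPy/SciPy sketch below, assuming a quadratic PEP with an invertible mass matrix $M$, reduces $(\lambda^2 M + \lambda C + K)x = 0$ to a GEP via companion linearization; it shows only the reduction that the quantum algorithms build on, not the block-encoding or phase-estimation machinery.

```python
import numpy as np
from scipy.linalg import eig

# Classical illustration: linearize the quadratic eigenvalue problem
# (lam^2 M + lam C + K) x = 0 into a GEP  A z = lam B z  with z = [x; lam x].
rng = np.random.default_rng(0)
n = 4
M = np.eye(n)                         # mass matrix (kept invertible here)
C = rng.standard_normal((n, n))       # damping
K = rng.standard_normal((n, n))       # stiffness

I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])
lam, z = eig(A, B)                    # the 2n eigenvalues of the original QEP
x = z[:n]                             # top block recovers the QEP eigenvectors

# Verify each eigenpair against the original quadratic problem.
rel = max(np.linalg.norm((l**2 * M + l * C + K) @ x[:, j]) / np.linalg.norm(x[:, j])
          for j, l in enumerate(lam))
print("max relative QEP residual:", rel)
```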
There has been a wave of interest in applying machine learning to study dynamical systems. In particular, neural networks have been applied to solve the equations of motion and therefore track the evolution of a system. In contrast to other applications of neural networks and machine learning, dynamical systems possess invariants such as energy, momentum, and angular momentum, depending on their underlying symmetries. Traditional numerical integration methods sometimes violate these conservation laws, propagating errors in time and ultimately reducing the predictability of the method. We present a data-free Hamiltonian neural network that solves the differential equations that govern dynamical systems. This is an equation-driven unsupervised learning method where the optimization process of the network depends solely on the predicted functions, without using any ground-truth data. This unsupervised model learns solutions that satisfy, identically and up to an arbitrarily small error, Hamilton's equations, and therefore conserve the Hamiltonian invariants. Once the network is optimized, the proposed architecture is considered a symplectic unit due to the introduction of an efficient parametric form of solutions. In addition, the choice of an appropriate activation function drastically improves the predictability of the network. An error analysis is derived, showing that the numerical errors depend on the overall network performance. The symplectic architecture is then employed to solve the equations for the nonlinear oscillator and the chaotic Hénon-Heiles dynamical system. In both systems, a symplectic Euler integrator requires two orders of magnitude more evaluation points than the Hamiltonian network to achieve the same order of numerical error in the predicted phase-space trajectories.
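
As a rough illustration of this data-free setup, the PyTorch sketch below trains a network to satisfy Hamilton's equations for a quartic oscillator H = p^2/2 + x^2/2 + x^4/4 (an illustrative choice of nonlinear oscillator, not necessarily the paper's), with a parametric form that enforces the initial conditions identically. Tanh is used for simplicity, even though the paper stresses that the activation choice strongly affects accuracy.

```python
import torch

# Minimal data-free sketch (illustrative, not the paper's exact architecture):
# learn x(t), p(t) obeying Hamilton's equations for H = p^2/2 + x^2/2 + x^4/4.
torch.manual_seed(0)
x0, p0, T = 1.0, 0.0, 4.0

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(20000):
    t = (torch.rand(128, 1) * T).requires_grad_(True)
    out = net(t)
    # Parametric form enforcing x(0) = x0 and p(0) = p0 identically.
    g = 1.0 - torch.exp(-t)
    x = x0 + g * out[:, :1]
    p = p0 + g * out[:, 1:]
    dx = torch.autograd.grad(x.sum(), t, create_graph=True)[0]
    dp = torch.autograd.grad(p.sum(), t, create_graph=True)[0]
    # Hamilton's equations: dx/dt = dH/dp = p,  dp/dt = -dH/dx = -(x + x^3).
    loss = (dx - p).pow(2).mean() + (dp + x + x**3).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A natural sanity check after training is to evaluate H(x(t), p(t)) on a time grid and confirm it stays close to its initial value, which is the conservation property the abstract emphasizes.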
Multifidelity simulation methodologies are often used in an attempt to judiciously combine low-fidelity and high-fidelity simulation results in an accuracy-increasing, cost-saving way. Candidates for this approach are simulation methodologies for which there are fidelity differences connected with significant computational cost differences. Physics-informed Neural Networks (PINNs) are candidates for these types of approaches due to the significant difference in training times required when different fidelities (expressed in terms of architecture width and depth as well as optimization criteria) are employed. In this paper, we propose a particular multifidelity approach applied to PINNs that exploits low-rank structure. We demonstrate that width, depth, and optimization criteria can be used as parameters related to model fidelity, and show numerical justification of cost differences in training due to fidelity parameter choices. We test our multifidelity scheme on various canonical forward PDE models that have been presented in the emerging PINNs literature.
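
A toy version of the width/depth-as-fidelity idea, sketched in PyTorch below: train a small PINN cheaply, freeze it, and let a larger PINN learn a correction on top. This is only a schematic stand-in for the multifidelity concept; the paper's specific low-rank construction is not reproduced here, and the PDE and architecture sizes are illustrative.

```python
import torch

# Toy multifidelity PINN sketch for u'' = -pi^2 sin(pi x) on [0, 1],
# u(0) = u(1) = 0 (exact solution: sin(pi x)).

def pinn(width, depth):
    layers, d_in = [], 1
    for _ in range(depth):
        layers += [torch.nn.Linear(d_in, width), torch.nn.Tanh()]
        d_in = width
    layers += [torch.nn.Linear(d_in, 1)]
    return torch.nn.Sequential(*layers)

def pde_loss(u_fn):
    x = torch.rand(128, 1, requires_grad=True)
    u = x * (1 - x) * u_fn(x)                # boundary conditions built in
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return (d2u + torch.pi**2 * torch.sin(torch.pi * x)).pow(2).mean()

# Cheap low-fidelity model: narrow and shallow.
lo = pinn(width=8, depth=1)
opt = torch.optim.Adam(lo.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    pde_loss(lo).backward()
    opt.step()

# High-fidelity model learns a correction to the frozen low-fidelity output.
hi = pinn(width=64, depth=3)
opt = torch.optim.Adam(hi.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    pde_loss(lambda x: lo(x).detach() + hi(x)).backward()
    opt.step()
```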
The earth system is exceedingly complex and often chaotic in nature, making prediction incredibly challenging: we cannot expect to make perfect predictions all of the time. Instead, we look for specific states of the system that lead to more predictable behavior than others, often termed forecasts of opportunity. When these opportunities are not present, scientists need prediction systems that are capable of saying "I don't know." We introduce a novel loss function, termed the NotWrong loss, that allows neural networks to identify forecasts of opportunity for classification problems. The NotWrong loss introduces an abstention class that allows the network to identify the more confident samples and abstain (say "I don't know") on the less confident samples. The abstention loss is designed to abstain on a user-defined fraction of the samples via a PID controller. Unlike many machine learning methods used to reject samples post-training, the NotWrong loss is applied during training to preferentially learn from the more confident samples. We show that the NotWrong loss outperforms other existing loss functions for multiple climate use cases. Implementing the proposed loss function is straightforward in most network architectures designed for classification, as it only requires adding an abstention class to the output layer and modifying the loss function.
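
One plausible shape for such a loss, sketched in PyTorch below: one extra output unit serves as the abstention class, and a simple PI controller (a reduced PID with illustrative gains) adjusts the abstention penalty so the abstention fraction tracks the user-defined target. The exact NotWrong formula is not given in the abstract, so the functional form here is an assumption.

```python
import torch

# Sketch of an abstention loss steered by a PI controller (illustrative;
# not the published NotWrong formulation). Class index K is the added
# abstention class.

class AbstentionLoss:
    def __init__(self, n_classes, target_frac, kp=1.0, ki=0.1):
        self.K, self.target = n_classes, target_frac
        self.kp, self.ki = kp, ki
        self.alpha, self.integral = 1.0, 0.0   # penalty weight + controller state

    def __call__(self, logits, y):
        p = logits.softmax(dim=1)
        p_abst = p[:, self.K]
        p_true = p.gather(1, y[:, None]).squeeze(1)
        # Classification term is down-weighted on abstained samples, so
        # training preferentially learns from the more confident ones.
        ce = -(1 - p_abst) * torch.log(p_true / (1 - p_abst) + 1e-8)
        penalty = -torch.log(1 - p_abst + 1e-8)   # cost of abstaining
        # PI controller: raise alpha if the net abstains too often, else lower it.
        frac = (p_abst > p[:, : self.K].max(dim=1).values).float().mean().item()
        err = frac - self.target
        self.integral += err
        self.alpha = max(0.0, self.alpha + self.kp * err + self.ki * self.integral)
        return (ce + self.alpha * penalty).mean()

# Usage: an ordinary classifier just gains one extra output unit.
K = 5
net = torch.nn.Linear(10, K + 1)
loss_fn = AbstentionLoss(n_classes=K, target_frac=0.3)
x, y = torch.randn(32, 10), torch.randint(0, K, (32,))
loss_fn(net(x), y).backward()
```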
The Fermionic Neural Network (FermiNet) is a recently developed neural network architecture that can be used as a wavefunction Ansatz for many-electron systems, and it has already demonstrated high accuracy on small systems. Here we present several improvements to the FermiNet that allow us to set new records for speed and accuracy on challenging systems. We find that increasing the size of the network is sufficient to reach chemical accuracy on atoms as large as argon. Through a combination of implementing FermiNet in JAX and simplifying several parts of the network, we are able to reduce the number of GPU hours needed to train the FermiNet on large systems by an order of magnitude. This enables us to run the FermiNet on the challenging transition of bicyclobutane to butadiene and to compare against the PauliNet on the automerization of cyclobutadiene, achieving results near the state of the art for both.
