
Hamiltonian neural networks for solving equations of motion

Published by: Marios Mattheakis
Publication date: 2020
Paper language: English





There has been a wave of interest in applying machine learning to study dynamical systems. In particular, neural networks have been applied to solve the equations of motion and thereby track the evolution of a system. In contrast to other applications of neural networks and machine learning, dynamical systems possess invariants such as energy, momentum, and angular momentum, depending on their underlying symmetries. Traditional numerical integration methods sometimes violate these conservation laws, propagating errors in time and ultimately reducing the predictability of the method. We present a data-free Hamiltonian neural network that solves the differential equations governing dynamical systems. This is an equation-driven unsupervised learning method in which the optimization of the network depends solely on the predicted functions, without using any ground-truth data. The unsupervised model learns solutions that satisfy Hamilton's equations identically, up to an arbitrarily small error, and therefore conserve the Hamiltonian invariants. Thanks to an efficient parametric form of the solutions, the optimized architecture acts as a symplectic unit. In addition, the choice of an appropriate activation function drastically improves the predictability of the network. An error analysis is derived, showing that the numerical errors depend on the overall network performance. The symplectic architecture is then employed to solve the equations of motion for the nonlinear oscillator and the chaotic Hénon-Heiles dynamical system. In both systems, a symplectic Euler integrator requires two orders of magnitude more evaluation points than the Hamiltonian network to achieve the same order of numerical error in the predicted phase-space trajectories.
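The following is a minimal sketch of the equation-driven idea described in the abstract, written in PyTorch (the paper's own implementation details may differ): a network mapping t to (x, p) is wrapped in a parametric form that enforces the initial conditions exactly, and the training loss is the residual of Hamilton's equations, with no ground-truth trajectories involved. The Hamiltonian (a nonlinear oscillator, H = p²/2 + x²/2 + x⁴/4), the initial conditions, and all hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of a data-free Hamiltonian network (assumptions: PyTorch,
# H = p^2/2 + x^2/2 + x^4/4, illustrative initial conditions and hyperparameters).
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),            # outputs (N_x, N_p)
)
x0, p0 = 1.0, 0.0                      # illustrative initial conditions
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(10_000):
    # random collocation times over the training interval
    t = 4.0 * torch.pi * torch.rand(256, 1, requires_grad=True)
    out = net(t)
    f = 1.0 - torch.exp(-t)            # f(0) = 0, so x(0) = x0, p(0) = p0 exactly
    x = x0 + f * out[:, 0:1]
    p = p0 + f * out[:, 1:2]
    ones = torch.ones_like(x)
    dxdt = torch.autograd.grad(x, t, ones, create_graph=True)[0]
    dpdt = torch.autograd.grad(p, t, ones, create_graph=True)[0]
    # residuals of Hamilton's equations: dx/dt = dH/dp, dp/dt = -dH/dx
    loss = ((dxdt - p) ** 2 + (dpdt + x + x ** 3) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

The abstract stresses that the choice of activation function matters for predictability; the tanh used here is just a placeholder, not the paper's choice.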




Read also

Accurate numerical solutions for the Schrödinger equation are of utmost importance in quantum chemistry. However, the computational cost of current high-accuracy methods scales poorly with the number of interacting particles. Combining Monte Carlo methods with unsupervised training of neural networks has recently been proposed as a promising approach to overcome the curse of dimensionality in this setting and to obtain accurate wavefunctions for individual molecules at a moderately scaling computational cost. These methods currently do not exploit the regularity exhibited by wavefunctions with respect to their molecular geometries. Inspired by recent successful applications of deep transfer learning in machine translation and computer vision tasks, we attempt to leverage this regularity by introducing a weight-sharing constraint when optimizing neural network-based models for different molecular geometries. That is, we restrict the optimization process such that up to 95 percent of weights in a neural network model are in fact equal across varying molecular geometries. We find that this technique can accelerate optimization when considering sets of nuclear geometries of the same molecule by an order of magnitude and that it opens a promising route towards pre-trained neural network wavefunctions that yield high accuracy even across different molecules.
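As a loose illustration of the weight-sharing constraint this abstract describes, the sketch below (PyTorch; all names, layer sizes, and the physics-free setup are assumptions, not the paper's actual architecture) shares a common trunk across molecular geometries and keeps only a small per-geometry head.

```python
# Sketch of a weight-sharing constraint across geometries (assumed setup).
import torch

trunk = torch.nn.Sequential(                  # parameters shared by all
    torch.nn.Linear(3, 64), torch.nn.SiLU(),  # geometries; in the paper, up to
    torch.nn.Linear(64, 64), torch.nn.SiLU(), # 95 percent of the weights are
)                                             # constrained to be equal
heads = torch.nn.ModuleList(
    torch.nn.Linear(64, 1) for _ in range(2)  # small geometry-specific part
)

def log_psi(geometry_idx: int, coords: torch.Tensor) -> torch.Tensor:
    """Wavefunction surrogate: shared trunk plus a per-geometry head."""
    return heads[geometry_idx](trunk(coords))
```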
Deep quantum neural networks may provide a promising way to achieve quantum learning advantage with noisy intermediate-scale quantum devices. Here, we use deep quantum feedforward neural networks capable of universal quantum computation to represent the mixed states for open quantum many-body systems and introduce a variational method with quantum derivatives to solve the master equation for dynamics and stationary states. Owing to the special structure of the quantum networks, this approach enjoys a number of notable features, including the absence of barren plateaus, an efficient quantum analogue of the backpropagation algorithm, resource-saving reuse of hidden qubits, general applicability independent of dimensionality and entanglement properties, as well as the convenient implementation of symmetries. As proof-of-principle demonstrations, we apply this approach to both one-dimensional transverse field Ising and two-dimensional $J_1-J_2$ models with dissipation, and show that it can efficiently capture their dynamics and stationary states with a desired accuracy.
Eigenvalue problems are critical to several fields of science and engineering. We present a novel unsupervised neural network for discovering eigenfunctions and eigenvalues for differential eigenvalue problems with solutions that identically satisfy the boundary conditions. A scanning mechanism is embedded allowing the method to find an arbitrary number of solutions. The network optimization is data-free and depends solely on the predictions. The unsupervised method is used to solve the quantum infinite well and quantum oscillator eigenvalue problems.
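A minimal sketch of this data-free eigenvalue idea, for the infinite square well on [0, 1] (-ψ'' = Eψ with ψ(0) = ψ(1) = 0), might look as follows. The PyTorch setup, the penalty against the trivial solution, and all hyperparameters are assumptions, and the paper's scanning mechanism for finding multiple eigenpairs is omitted.

```python
# Sketch: unsupervised eigenpair search for -psi'' = E * psi on [0, 1]
# (assumed PyTorch setup; illustrative hyperparameters).
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
E = torch.nn.Parameter(torch.tensor(8.0))            # trainable eigenvalue guess
opt = torch.optim.Adam(list(net.parameters()) + [E], lr=1e-3)

for step in range(20_000):
    x = torch.rand(256, 1, requires_grad=True)
    psi = x * (1.0 - x) * net(x)                     # zero at both boundaries
    ones = torch.ones_like(psi)
    d1 = torch.autograd.grad(psi, x, ones, create_graph=True)[0]
    d2 = torch.autograd.grad(d1, x, ones, create_graph=True)[0]
    residual = (d2 + E * psi).pow(2).mean()          # equation residual
    anti_trivial = (psi.pow(2).mean() - 0.5).pow(2)  # crude penalty vs. psi = 0
    loss = residual + anti_trivial
    opt.zero_grad(); loss.backward(); opt.step()
# the converged E should sit near (n * pi)^2 for some integer n
```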
Multifidelity simulation methodologies are often used in an attempt to judiciously combine low-fidelity and high-fidelity simulation results in an accuracy-increasing, cost-saving way. Candidates for this approach are simulation methodologies for which there are fidelity differences connected with significant computational cost differences. Physics-informed Neural Networks (PINNs) are candidates for these types of approaches due to the significant difference in training times required when different fidelities (expressed in terms of architecture width and depth as well as optimization criteria) are employed. In this paper, we propose a particular multifidelity approach applied to PINNs that exploits low-rank structure. We demonstrate that width, depth, and optimization criteria can be used as parameters related to model fidelity, and show numerical justification of cost differences in training due to fidelity parameter choices. We test our multifidelity scheme on various canonical forward PDE models that have been presented in the emerging PINNs literature.
We propose the use of physics-informed neural networks for solving the shallow-water equations on the sphere. Physics-informed neural networks are trained to satisfy the differential equations along with the prescribed initial and boundary data, and thus can be seen as an alternative approach to solving differential equations compared to traditional numerical approaches such as finite difference, finite volume or spectral methods. We discuss the training difficulties of physics-informed neural networks for the shallow-water equations on the sphere and propose a simple multi-model approach to tackle test cases of comparatively long time intervals. We illustrate the abilities of the method by solving the most prominent test cases proposed by Williamson et al. [J. Comput. Phys. 102, 211-224, 1992].
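To make the physics-informed loss concrete, here is a toy sketch on 1-D advection (u_t + u_x = 0 with u(0, x) = sin x) rather than the full shallow-water system on the sphere; the PyTorch setup, sampling scheme, and equal weighting of the loss terms are assumptions, not the paper's configuration.

```python
# Sketch of a physics-informed loss: PDE residual + initial-condition term
# (assumed PyTorch setup; toy 1-D advection problem, not shallow water).
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(10_000):
    t = torch.rand(256, 1, requires_grad=True)                    # t in [0, 1]
    x = (2 * torch.pi * torch.rand(256, 1)).requires_grad_(True)  # x in [0, 2pi]
    u = net(torch.cat([t, x], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    pde = (u_t + u_x).pow(2).mean()                  # residual of u_t + u_x = 0
    x0 = 2 * torch.pi * torch.rand(256, 1)
    u0 = net(torch.cat([torch.zeros_like(x0), x0], dim=1))
    ic = (u0 - torch.sin(x0)).pow(2).mean()          # match u(0, x) = sin x
    loss = pde + ic
    opt.zero_grad(); loss.backward(); opt.step()
```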
