
SympNets: Intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems

Published by: Pengzhan Jin
Publication date: 2020
Paper language: English

We propose new symplectic networks (SympNets) for identifying Hamiltonian systems from data, based on a composition of linear, activation and gradient modules. In particular, we define two classes of SympNets: the LA-SympNets composed of linear and activation modules, and the G-SympNets composed of gradient modules. Correspondingly, we prove two new universal approximation theorems that demonstrate that SympNets can approximate arbitrary symplectic maps based on appropriate activation functions. We then perform several experiments including the pendulum, double pendulum and three-body problems to investigate the expressivity and the generalization ability of SympNets. The simulation results show that even very small SympNets can generalize well, and are able to handle both separable and non-separable Hamiltonian systems with data points resulting from short or long time steps. In all the test cases, SympNets outperform the baseline models, and are much faster in training and prediction. We also develop an extended version of SympNets to learn the dynamics from irregularly sampled data. This extended version of SympNets can be thought of as a universal model representing the solution to an arbitrary Hamiltonian system.
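
As a rough illustration of the gradient modules mentioned in the abstract, the sketch below composes shear maps of the form (p, q) -> (p + grad_q f(q), q), with f built from the antiderivative of an activation function; such maps are symplectic for any parameter values, so their composition is too. The function names, shapes, and the alternating "up"/"low" arrangement are illustrative assumptions, not the paper's exact architecture or API.

```python
import numpy as np

def gradient_module_up(p, q, K, a, b, sigma=np.tanh):
    # Shear map (p, q) -> (p + grad_q f(q), q), where
    # f(q) = sum_i a_i * S((K q + b)_i) and S is an antiderivative of sigma,
    # so grad_q f(q) = K^T (a * sigma(K q + b)). Symplectic for any K, a, b.
    return p + K.T @ (a * sigma(K @ q + b)), q

def gradient_module_low(p, q, K, a, b, sigma=np.tanh):
    # Companion shear acting on q: (p, q) -> (p, q + grad_p g(p)).
    return p, q + K.T @ (a * sigma(K @ p + b))

# Toy composition: alternate "up" and "low" modules with small random weights.
rng = np.random.default_rng(0)
dim, width, n_modules = 2, 16, 4
params = [(0.1 * rng.normal(size=(width, dim)),   # K
           0.1 * rng.normal(size=width),          # a
           rng.normal(size=width))                # b
          for _ in range(n_modules)]

p, q = rng.normal(size=dim), rng.normal(size=dim)
for i, (K, a, b) in enumerate(params):
    module = gradient_module_up if i % 2 == 0 else gradient_module_low
    p, q = module(p, q, K, a, b)
print(p, q)  # image of (p, q) under a map that is symplectic by construction
```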

Read also

We propose an effective and lightweight learning algorithm, Symplectic Taylor Neural Networks (Taylor-nets), to conduct continuous, long-term predictions of a complex Hamiltonian dynamic system based on sparse, short-term observations. At the heart of our algorithm is a novel neural network architecture consisting of two sub-networks. Both are embedded with terms in the form of Taylor series expansion designed with symmetric structure. The key mechanism underpinning our infrastructure is the strong expressiveness and special symmetric property of the Taylor series expansion, which naturally accommodate the numerical fitting process of the gradients of the Hamiltonian with respect to the generalized coordinates as well as preserve its symplectic structure. We further incorporate a fourth-order symplectic integrator in conjunction with the neural ODE framework into our Taylor-net architecture to learn the continuous-time evolution of the target systems while simultaneously preserving their symplectic structures. We demonstrate the efficacy of our Taylor-net in predicting a broad spectrum of Hamiltonian dynamic systems, including the pendulum, the Lotka--Volterra, the Kepler, and the Henon--Heiles systems. Our model exhibits unique computational merits by outperforming previous methods to a great extent regarding the prediction accuracy, the convergence rate, and the robustness, despite using extremely small training data with a short training period (6000 times shorter than the predicting period), small sample sizes, and no intermediate data to train the networks.
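
The fourth-order symplectic integrator mentioned above can be realized, for a separable Hamiltonian H(q, p) = T(p) + V(q), by the classical Forest-Ruth composition of drifts and kicks. The sketch below shows one such step; in the Taylor-net setting the learned sub-networks would supply the gradients of H, whereas here a simple pendulum Hamiltonian stands in, and all names are illustrative.

```python
import numpy as np

# Forest-Ruth (Yoshida) coefficients for a fourth-order symplectic composition.
_w = 2.0 ** (1.0 / 3.0)
C = [1 / (2 * (2 - _w)), (1 - _w) / (2 * (2 - _w)),
     (1 - _w) / (2 * (2 - _w)), 1 / (2 * (2 - _w))]
D = [1 / (2 - _w), -_w / (2 - _w), 1 / (2 - _w), 0.0]

def forest_ruth_step(q, p, h, dT_dp, dV_dq):
    """One fourth-order symplectic step for a separable H(q, p) = T(p) + V(q)."""
    for c, d in zip(C, D):
        q = q + c * h * dT_dp(p)   # drift
        p = p - d * h * dV_dq(q)   # kick
    return q, p

# Stand-in for the learned gradient networks: a pendulum, H = p^2/2 - cos(q).
dT_dp = lambda p: p
dV_dq = lambda q: np.sin(q)

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = forest_ruth_step(q, p, 0.01, dT_dp, dV_dq)
print(q, p)
```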
We introduce an approach for imposing physically informed inductive biases in learned simulation models. We combine graph networks with a differentiable ordinary differential equation integrator as a mechanism for predicting future states, and a Hamiltonian as an internal representation. We find that our approach outperforms baselines without these biases in terms of predictive accuracy, energy accuracy, and zero-shot generalization to time-step sizes and integrator orders not experienced during training. This advances the state-of-the-art of learned simulation, and in principle is applicable beyond physical domains.
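
A minimal sketch of the general pattern described above, with the Hamiltonian as the internal representation and its vector field handed to an off-the-shelf ODE integrator, is given below. The learned (e.g. graph-network) Hamiltonian is replaced by a toy analytic stand-in, and the helper name is an assumption rather than anything from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hamiltonian_vector_field(grad_H):
    """Wrap grad H(q, p) as the ODE right-hand side
    dq/dt = dH/dp, dp/dt = -dH/dq, ready for any ODE integrator."""
    def rhs(t, y):
        q, p = np.split(y, 2)
        dH_dq, dH_dp = grad_H(q, p)
        return np.concatenate([dH_dp, -dH_dq])
    return rhs

# Stand-in for a learned Hamiltonian: two uncoupled oscillators,
# H = (|q|^2 + |p|^2) / 2, so dH/dq = q and dH/dp = p.
grad_H = lambda q, p: (q, p)

y0 = np.array([1.0, 0.5, 0.0, 0.0])    # [q1, q2, p1, p2]
sol = solve_ivp(hamiltonian_vector_field(grad_H), (0.0, 10.0), y0, rtol=1e-8)
print(sol.y[:, -1])
```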
Generalized Additive Runge-Kutta (GARK) schemes have been shown to be a suitable tool for solving ordinary differential equations with additively partitioned right-hand sides. This work generalizes these GARK schemes to symplectic GARK schemes for additively partitioned Hamiltonian systems. In a general setting, we derive conditions for symplecticness, as well as symmetry and time-reversibility. We show how symplectic and symmetric schemes can be constructed based on schemes which are only symplectic. Special attention is given to the case of partitioned schemes for Hamiltonians split into multiple potential and kinetic energies. Finally, we show how symplectic GARK schemes can efficiently exploit different time scales and evaluation costs for different potentials by using different orders for these parts.
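
This is not the GARK construction itself, but the additive-partitioning idea it generalizes can be illustrated by a symmetric splitting step for a Hamiltonian with one kinetic energy and several potentials, as sketched below; all names and the toy potentials are assumptions for illustration only.

```python
import numpy as np

def symmetric_split_step(q, p, h, dT_dp, potentials):
    """One symmetric, symplectic step for H = T(p) + V_1(q) + ... + V_k(q):
    half-kick each potential in order, drift with T, then half-kick in
    reverse order. Cheap and expensive potentials can simply be listed
    separately."""
    for dV_dq in potentials:            # forward half-kicks
        p = p - 0.5 * h * dV_dq(q)
    q = q + h * dT_dp(p)                # drift
    for dV_dq in reversed(potentials):  # reverse half-kicks (symmetry)
        p = p - 0.5 * h * dV_dq(q)
    return q, p

# Toy split: a harmonic part plus a weak quartic perturbation.
dT_dp = lambda p: p
potentials = [lambda q: q, lambda q: 0.4 * q ** 3]

q, p = np.array([1.0]), np.array([0.0])
for _ in range(2000):
    q, p = symmetric_split_step(q, p, 0.005, dT_dp, potentials)
print(q, p)
```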
Reduced basis methods are popular for approximately solving large and complex systems of differential equations. However, conventional reduced basis methods do not generally preserve conservation laws and symmetries of the full order model. Here, we present an approach to reduced model construction that preserves the symplectic symmetry of dissipative Hamiltonian systems. The method constructs a closed reduced Hamiltonian system by coupling the full model with a canonical heat bath. This allows the reduced system to be integrated with a symplectic integrator, resulting in a correct dissipation of energy, preservation of the total energy, and, ultimately, stability of the solution. Accuracy and stability of the method are illustrated through the numerical simulation of the dissipative wave equation and a port-Hamiltonian model of an electric circuit.
While reduced-order models (ROMs) have been popular for efficiently solving large systems of differential equations, the stability of reduced models over long-time integration remains a challenge. We present a greedy approach for ROM generation of parametric Hamiltonian systems that captures the symplectic structure of Hamiltonian systems to ensure stability of the reduced model. Through the greedy selection of basis vectors, two new vectors are added at each iteration to the linear vector space to increase the accuracy of the reduced basis. We use the error in the Hamiltonian due to model reduction as an error indicator to search the parameter space and identify the next best basis vectors. Under natural assumptions on the set of all solutions of the Hamiltonian system under variation of the parameters, we show that the greedy algorithm converges at an exponential rate. Moreover, we demonstrate that combining the greedy basis with the discrete empirical interpolation method also preserves the symplectic structure. This enables the reduction of the computational cost for nonlinear Hamiltonian systems. The efficiency, accuracy, and stability of this model reduction technique are illustrated through simulations of the parametric wave equation and the parametric Schrödinger equation.
