
From efficient symplectic exponentiation of matrices to symplectic integration of high-dimensional Hamiltonian systems with slowly varying quadratic stiff potentials

Added by Molei Tao
Publication date: 2010
Language: English





We present a multiscale integrator for Hamiltonian systems with slowly varying quadratic stiff potentials that uses coarse timesteps (analogous to what the impulse method achieves for constant quadratic stiff potentials). The method is based on the highly non-trivial introduction of two efficient symplectic schemes for the exponentiation of matrices that require only O(n) matrix multiplications per coarse time step, for a preset small number n. The proposed integrator is shown to be (i) uniformly convergent in positions; (ii) symplectic in both the slow and the fast variables; (iii) well adapted to high-dimensional systems. Our framework also provides a general method for efficiently and iteratively exponentiating a slowly varying sequence of (possibly high-dimensional) matrices.
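The two schemes themselves are not given in the abstract. As a loose illustration of how a symplectic matrix exponential can be bought with O(n) multiplications, here is a minimal sketch built on the standard Cayley transform plus repeated squaring; this is not the authors' method, `cayley` and `symplectic_expm` are illustrative names, and n plays the role of the preset small number.

```python
import numpy as np

def cayley(X):
    """Cayley approximant of exp(X); for a Hamiltonian matrix X it is
    exactly symplectic, unlike a truncated Taylor series."""
    I = np.eye(X.shape[0])
    return np.linalg.solve(I - 0.5 * X, I + 0.5 * X)

def symplectic_expm(A, h, n=10):
    """Approximate exp(h * A) with O(n) matrix multiplications:
    apply the Cayley map at the tiny step h / 2**n, then square n times.
    Symplecticity is preserved exactly by each squaring."""
    E = cayley((h / 2.0 ** n) * A)
    for _ in range(n):
        E = E @ E  # one matrix multiplication per doubling of the step
    return E

# Example: harmonic oscillator q' = p, p' = -w**2 * q.
w = 3.0
A = np.array([[0.0, 1.0], [-w**2, 0.0]])  # Hamiltonian matrix
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # canonical symplectic form
E = symplectic_expm(A, h=0.5, n=12)
print(np.allclose(E.T @ J @ E, J))        # True: E is symplectic
```

Because the Cayley factor changes little when A varies slowly, this kind of construction also hints at how a slowly varying sequence of matrices might be exponentiated iteratively, in the sense of the last sentence of the abstract.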



Related research

Molei Tao, Houman Owhadi, 2009
We introduce a new class of integrators for stiff ODEs as well as SDEs. These integrators are (i) Multiscale: they are based on flow averaging, and so do not fully resolve the fast variables and have a computational cost determined by the slow variables. (ii) Versatile: the method is based on averaging the flows of the given dynamical system (which may have hidden slow and fast processes) instead of averaging the instantaneous drift of assumed separated slow and fast processes; this bypasses the need to identify explicitly (or numerically) the slow or fast variables. (iii) Nonintrusive: a pre-existing numerical scheme resolving the microscopic time scale can be used as a black box and easily turned into one of the integrators in this paper by turning the large coefficients on over a microscopic timescale and off during a mesoscopic timescale. (iv) Convergent over two scales: strongly over the slow processes and in the sense of measures over the fast ones; we introduce the related notion of two-scale flow convergence and analyze the convergence of these integrators under the induced topology. (v) Structure preserving: for stiff Hamiltonian systems (possibly on manifolds), they can be made symplectic, time-reversible, and symmetry preserving (symmetries are group actions that leave the system invariant) in all variables. They are explicit and applicable to arbitrary stiff potentials (which need not be quadratic). Their application to the Fermi-Pasta-Ulam problem shows accuracy and stability over four orders of magnitude of time scales. For stiff Langevin equations, they are symmetry preserving, time-reversible, Boltzmann-Gibbs reversible, quasi-symplectic in all variables, and conformally symplectic with isotropic friction.
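Property (iii) can be made concrete in a few lines: wrap any pre-existing one-step scheme as a black box and toggle the stiff coefficients on for a microscopic time tau, then off for the rest of a mesoscopic step delta. The sketch below follows that spirit only; forward Euler and all names are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def euler(y, h, rhs):
    """Black-box one-step method (forward Euler, for illustration only)."""
    return y + h * rhs(y)

def flavor_step(y, tau, delta, f_slow, f_stiff, step=euler):
    """One macro step: stiff terms ON for a microscopic time tau,
    OFF for the remaining mesoscopic time delta - tau."""
    y = step(y, tau, lambda u: f_slow(u) + f_stiff(u))  # stiffness on
    y = step(y, delta - tau, f_slow)                    # stiffness off
    return y

# Toy usage: slow drift x' = z coupled to fast relaxation z -> sin(x).
eps = 1e-3
f_slow = lambda y: np.array([y[1], 0.0])
f_stiff = lambda y: np.array([0.0, (np.sin(y[0]) - y[1]) / eps])
y = np.array([1.0, 0.0])
for _ in range(200):
    y = flavor_step(y, tau=eps, delta=0.05, f_slow=f_slow, f_stiff=f_stiff)
```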
Generalized additive Runge-Kutta (GARK) schemes have been shown to be a suitable tool for solving ordinary differential equations with additively partitioned right-hand sides. This work generalizes GARK schemes to symplectic GARK schemes for additively partitioned Hamiltonian systems. In a general setting, we derive conditions for symplecticness, as well as symmetry and time-reversibility. We show how symplectic and symmetric schemes can be constructed from schemes that are only symplectic. Special attention is given to the special case of partitioned schemes for Hamiltonians split into multiple potential and kinetic energies. Finally, we show how symplectic GARK schemes can efficiently exploit different time scales and evaluation costs for different potentials by using different orders for these parts.
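A GARK tableau is more general than any single splitting, but the simplest instance of "different time scales and evaluation costs for different potentials" is the classical impulse pattern sketched below for H = T(p) + V_slow(q) + V_fast(q): a symmetric, symplectic composition in which the fast potential is substepped m times per step. Names are illustrative; this is not the GARK formalism itself.

```python
def multirate_verlet(q, p, h, m, dT, dV_slow, dV_fast):
    """Symmetric, symplectic step for H = T(p) + V_slow(q) + V_fast(q):
    one slow-force evaluation per step, m fast Verlet substeps inside
    (e.g. dT = lambda p: p for unit masses)."""
    p = p - 0.5 * h * dV_slow(q)          # slow half kick
    for _ in range(m):                    # m inner Verlet substeps
        p = p - 0.5 * (h / m) * dV_fast(q)
        q = q + (h / m) * dT(p)
        p = p - 0.5 * (h / m) * dV_fast(q)
    p = p - 0.5 * h * dV_slow(q)          # slow half kick
    return q, p
```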
We propose an effective and lightweight learning algorithm, Symplectic Taylor Neural Networks (Taylor-nets), to conduct continuous, long-term prediction of a complex Hamiltonian dynamical system based on sparse, short-term observations. At the heart of our algorithm is a novel neural network architecture consisting of two sub-networks, both embedded with terms in the form of a Taylor series expansion designed with a symmetric structure. The key mechanism underpinning our architecture is the strong expressiveness and special symmetric property of the Taylor series expansion, which naturally accommodates the numerical fitting of the gradients of the Hamiltonian with respect to the generalized coordinates while preserving its symplectic structure. We further incorporate a fourth-order symplectic integrator, in conjunction with the neural ODE framework, into our Taylor-net architecture to learn the continuous-time evolution of the target systems while simultaneously preserving their symplectic structures. We demonstrate the efficacy of our Taylor-net in predicting a broad spectrum of Hamiltonian dynamical systems, including the pendulum, Lotka-Volterra, Kepler, and Hénon-Heiles systems. Our model exhibits unique computational merits, outperforming previous methods to a great extent in prediction accuracy, convergence rate, and robustness, despite using extremely small training data with a short training period (6000 times shorter than the prediction period), small sample sizes, and no intermediate data to train the networks.
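The abstract names a fourth-order symplectic integrator inside the architecture without specifying it; one standard choice, assumed here purely for illustration, is Yoshida's triple-jump composition of velocity Verlet, where dT and dV could be the learned gradient networks.

```python
def verlet(q, p, h, dT, dV):
    """One velocity-Verlet step for a separable H = T(p) + V(q)."""
    p = p - 0.5 * h * dV(q)
    q = q + h * dT(p)
    p = p - 0.5 * h * dV(q)
    return q, p

def yoshida4(q, p, h, dT, dV):
    """Fourth-order symplectic step: Yoshida's triple-jump composition
    of Verlet, with a negative central substep."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1                   # negative central weight
    for w in (w1, w0, w1):
        q, p = verlet(q, p, w * h, dT, dV)
    return q, p
```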
In this paper, our goal is to study the regular reduction theory of regular controlled Hamiltonian (RCH) systems with symplectic structure and symmetry; this reduction is an extension of the regular symplectic reduction theory of Hamiltonian systems under regular controlled Hamiltonian equivalence conditions. In order to describe uniformly RCH systems defined on a cotangent bundle and on the regular reduced spaces, we first define a kind of RCH system on a symplectic fiber bundle. We then introduce regular point and regular orbit reducible RCH systems with symmetry by using the momentum map and the associated reduced symplectic forms. Moreover, we give regular point and regular orbit reduction theorems for RCH systems to explain the relationships between RpCH-equivalence and RoCH-equivalence for reducible RCH systems with symmetry and RCH-equivalence for the associated reduced RCH systems. Finally, as an application, we regard the rigid body and the heavy top, as well as these systems with internal rotors, as regular point reducible RCH systems on the rotation group SO(3) and on the Euclidean group SE(3), as well as on their generalizations, respectively, and discuss their RCH-equivalence. We also describe the RCH system and RCH-equivalence from the viewpoint of port-Hamiltonian systems with a symplectic structure.
HNets are a class of neural networks, grounded in physical priors, for learning Hamiltonian systems. This paper explains the influence of different integrators, viewed as hyper-parameters, on HNets through error analysis. If we define the network target as the map with zero empirical loss on arbitrary training data, then non-symplectic integrators cannot guarantee the existence of network targets for HNets. We introduce the inverse modified equations for HNets and prove that HNets based on symplectic integrators possess network targets, and that the differences between the network targets and the original Hamiltonians depend on the accuracy order of the integrators. Our numerical experiments show that the phase flows of the Hamiltonian systems obtained by symplectic HNets do not exactly preserve the original Hamiltonians but do preserve the computed network targets; that the loss of the network target on the training and test data is much smaller than the loss of the original Hamiltonian; and that symplectic HNets have more powerful generalization ability and higher accuracy than non-symplectic HNets on prediction tasks. Thus, symplectic integrators are of critical importance for HNets.
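As a concrete reading of "the integrator as a hyper-parameter", the sketch below shows the discrete residual an HNet would drive to zero when that hyper-parameter is the symplectic Euler scheme; dHdq and dHdp are hypothetical stand-ins for the network's learned gradient maps, and the paper's exact discretization may differ.

```python
def symplectic_euler_residual(q0, p0, q1, p1, h, dHdq, dHdp):
    """Residual of the symplectic Euler map
        p1 = p0 - h * dHdq(q0, p1),  q1 = q0 + h * dHdp(q0, p1)
    on one observed transition (q0, p0) -> (q1, p1); training
    minimizes the squared norms of rq and rp over the data."""
    rq = (q1 - q0) / h - dHdp(q0, p1)
    rp = (p1 - p0) / h + dHdq(q0, p1)
    return rq, rp
```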
