Forecasting Hamiltonian dynamics without canonical coordinates


Abstract

Conventional neural networks are universal function approximators, but because they are unaware of underlying symmetries or physical laws, they may need impractically many training data to approximate nonlinear dynamics. Recently introduced Hamiltonian neural networks can efficiently learn and forecast dynamical systems that conserve energy, but they require special inputs called canonical coordinates, which may be hard to infer from data. Here we significantly expand the scope of such networks by demonstrating a simple way to train them with any set of generalised coordinates, including easily observable ones.
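To make the abstract's terms concrete, the sketch below shows the standard Hamiltonian-neural-network formulation that the paper builds on: a small network learns a scalar Hamiltonian H(q, p) from canonical coordinates, and trajectories follow its symplectic gradient, dq/dt = ∂H/∂p, dp/dt = -∂H/∂q. This is a minimal illustration of that prerequisite idea only, not the paper's generalised-coordinate method; the architecture, initialisation, and function names are illustrative assumptions.

```python
# Minimal sketch of a canonical-coordinate Hamiltonian neural network
# (Greydanus et al. 2019 style). All names and hyperparameters here are
# illustrative assumptions, not the authors' implementation.
import jax
import jax.numpy as jnp


def init_mlp(key, sizes):
    """Initialise a small MLP that maps z = [q, p] to a scalar Hamiltonian."""
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params


def hamiltonian(params, z):
    """Learned scalar H(q, p); z concatenates canonical coordinates [q, p]."""
    h = z
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]


def time_derivative(params, z):
    """Symplectic gradient of H: dq/dt = dH/dp, dp/dt = -dH/dq."""
    dH = jax.grad(hamiltonian, argnums=1)(params, z)
    dHdq, dHdp = jnp.split(dH, 2)
    return jnp.concatenate([dHdp, -dHdq])


def loss(params, z_batch, zdot_batch):
    """Mean squared error between predicted and observed time derivatives."""
    pred = jax.vmap(lambda z: time_derivative(params, z))(z_batch)
    return jnp.mean((pred - zdot_batch) ** 2)


# Example: a single 1-D oscillator, so z = [q, p] has dimension 2.
key = jax.random.PRNGKey(0)
params = init_mlp(key, [2, 64, 64, 1])
z0 = jnp.array([1.0, 0.0])
print(time_derivative(params, z0))  # learned estimate of (dq/dt, dp/dt)
```

Because the learned flow is the symplectic gradient of a single scalar, energy conservation is built into the model; the abstract's contribution is removing the requirement that the inputs z be canonical (q, p) pairs, allowing training on any observable generalised coordinates.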