
Inverse modified differential equations for discovery of dynamics

Added by Aiqing Zhu
Publication date: 2020
Research language: English





The combination of numerical integration and deep learning, i.e., ODE-net, has been successfully employed in a variety of applications. In this work, we introduce inverse modified differential equations (IMDE) to contribute to the behaviour and error analysis of the discovery of dynamics using ODE-net. It is shown that the difference between the learned ODE and the truncated IMDE is bounded by the sum of the learning loss and a discrepancy that can be made sub-exponentially small. In addition, we deduce that the total error of ODE-net is bounded by the sum of the discretisation error and the learning loss. Furthermore, with the help of IMDE, theoretical results on learning Hamiltonian systems are derived. Several experiments are performed to numerically verify our theoretical results.
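The central claim can be checked on a minimal scalar example. In the sketch below (an illustrative setup, not the paper's experiments), data from the linear ODE x' = a*x are fitted exactly by a forward-Euler "ODE-net" with a linear field, and the recovered coefficient agrees with the first-order truncated IMDE a + (h/2)a^2 to higher order than it agrees with the true field a:

```python
import math

# True dynamics: x' = a*x, observed exactly through its flow x(t+h) = exp(a*h)*x(t).
a = -1.0
h = 0.1

# A forward-Euler step with a learned linear field a_hat*x matches the data exactly
# when x + h*a_hat*x = exp(a*h)*x, i.e. a_hat = (exp(a*h) - 1)/h.
a_hat = (math.exp(a * h) - 1.0) / h

# First-order truncation of the inverse modified differential equation (IMDE)
# for forward Euler: x' = (a + (h/2)*a**2) * x.
a_imde = a + 0.5 * h * a ** 2

print(abs(a_hat - a))       # O(h): the learned field deviates from the true field
print(abs(a_hat - a_imde))  # O(h^2): but it stays much closer to the truncated IMDE
```

Here the learning loss is zero by construction, so the gap between the learned ODE and the truncated IMDE is just the truncation remainder, consistent with the bound stated in the abstract.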


The numerical solution of differential equations can be formulated as an inference problem to which formal statistical approaches can be applied. However, nonlinear partial differential equations (PDEs) pose substantial challenges from an inferential perspective, most notably the absence of explicit conditioning formulae. This paper extends earlier work on linear PDEs to a general class of initial value problems specified by nonlinear PDEs, motivated by problems for which evaluations of the right-hand side, initial conditions, or boundary conditions of the PDE have a high computational cost. The proposed method can be viewed as exact Bayesian inference under an approximate likelihood, which is based on discretisation of the nonlinear differential operator. Proof-of-concept experimental results demonstrate that meaningful probabilistic uncertainty quantification for the unknown solution of the PDE can be performed, while controlling the number of times the right-hand side, initial and boundary conditions are evaluated. A suitable prior model for the solution of the PDE is identified using novel theoretical analysis of the sample path properties of Matérn processes, which may be of independent interest.
Solving general high-dimensional partial differential equations (PDEs) is a long-standing challenge in numerical mathematics. In this paper, we propose a novel approach to solve high-dimensional linear and nonlinear PDEs defined on arbitrary domains by leveraging their weak formulations. We convert the problem of finding the weak solution of a PDE into an operator norm minimization problem induced from the weak formulation. The weak solution and the test function in the weak formulation are then parameterized as the primal and adversarial networks respectively, which are alternately updated to approximate the optimal network parameter setting. Our approach, termed the weak adversarial network (WAN), is fast, stable, and completely mesh-free, which makes it particularly suitable for high-dimensional PDEs defined on irregular domains, where classical numerical methods based on finite differences and finite elements suffer from slow computation, instability and the curse of dimensionality. We apply our method to a variety of test problems with high-dimensional PDEs to demonstrate its promising performance.
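The objective behind this min-max formulation can be made concrete on a 1-D toy problem. The sketch below is illustrative only, not the WAN training loop: there are no networks, just one fixed test function. It evaluates the weak residual of -u'' = f on (0, 1), which vanishes for the true weak solution and not for a wrong candidate; WAN's primal network minimizes (and its adversarial network maximizes) exactly this kind of quantity:

```python
import math

def weak_residual(du, f, dv, v, n=2000):
    # Midpoint-rule approximation of  int u'(x) v'(x) dx - int f(x) v(x) dx  on (0, 1),
    # the weak-form residual of -u'' = f with zero Dirichlet boundary conditions.
    dx = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        total += (du(x) * dv(x) - f(x) * v(x)) * dx
    return total

f = lambda x: math.pi ** 2 * math.sin(math.pi * x)  # chosen so the true solution is sin(pi*x)
v = lambda x: x * (1.0 - x)                         # one fixed test function, v(0) = v(1) = 0
dv = lambda x: 1.0 - 2.0 * x

r_true = weak_residual(lambda x: math.pi * math.cos(math.pi * x), f, dv, v)
r_wrong = weak_residual(lambda x: 1.0 - 2.0 * x, f, dv, v)  # wrong candidate u = x(1-x)

print(abs(r_true))   # ~0: the true solution has (near) zero weak residual
print(abs(r_wrong))  # clearly nonzero for the wrong candidate
```

In WAN the fixed test function v is replaced by an adversarially trained network, so that the residual is driven to zero over a whole family of test functions rather than a single one.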
Zeyu Jin, Ruo Li (2021)
We propose a high order numerical homogenization method for dissipative ordinary differential equations (ODEs) containing two time scales. Essentially, only a first order homogenized model can be derived globally in time. To achieve a high order method, we adopt a numerical approach in the framework of the heterogeneous multiscale method (HMM). By a successively refined microscopic solver, accuracy improvement up to arbitrary order is attained, provided the input data are smooth enough. Based on the formulation of the high order microscopic solver we derive, an iterative formula for calculating the microscopic solver is then proposed. Using the iterative formula, we develop an efficient implementation of the method for practical applications. Several numerical examples are presented to validate the new models and numerical methods.
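The basic HMM structure — a macro step driven by a micro solver that relaxes the fast variable to its quasi-equilibrium — can be sketched on a toy dissipative two-scale system. The system, step sizes, and first-order solvers below are illustrative choices, not the paper's high order scheme:

```python
def hmm_step(x, h, eps, micro_dt=1e-4, micro_steps=200):
    # Micro solver: relax the fast variable y' = -(y - (1 + x**2)) / eps with x frozen,
    # so y approaches its quasi-equilibrium 1 + x**2.
    y = 0.0
    for _ in range(micro_steps):
        y += micro_dt * (-(y - (1.0 + x * x)) / eps)
    # Macro solver: forward Euler on the slow equation x' = -y * x, using the relaxed y.
    return x + h * (-y * x)

eps, h, T = 1e-3, 0.01, 1.0
x_hmm = x_hom = 1.0
for _ in range(int(T / h)):
    x_hmm = hmm_step(x_hmm, h, eps)
    x_hom = x_hom + h * (-(1.0 + x_hom * x_hom) * x_hom)  # homogenized model x' = -(1+x^2)x

print(abs(x_hmm - x_hom))  # tiny: the HMM trajectory tracks the homogenized dynamics
```

The higher order methods in the paper refine this picture by correcting the micro solver successively rather than using a single first order relaxation as done here.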
We develop in this work a numerical method for stochastic differential equations (SDEs) with weak second order accuracy based on Gaussian mixtures. Unlike conventional higher order schemes for SDEs based on the Itô–Taylor expansion and iterated Itô integrals, the proposed scheme approximates the probability measure $\mu(X^{n+1}|X^n=x_n)$ by a mixture of Gaussians. The solution at the next time step $X^{n+1}$ is then drawn from the Gaussian mixture with complexity linear in the dimension $d$. This provides a new general strategy to construct efficient high weak order numerical schemes for SDEs.
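In the simplest single-component case this transition-density view reduces to the familiar Gaussian step of Euler–Maruyama: X^{n+1} | X^n = x is drawn from N(x + a(x)h, b(x)^2 h). The sketch below uses that one-Gaussian transition (not the paper's mixture; the drift, diffusion, and parameters are illustrative) and checks weak accuracy in the mean for geometric Brownian motion:

```python
import math
import random

def gaussian_step(x, drift, diffusion, h, rng):
    # One-component version of the Gaussian(-mixture) transition:
    # X^{n+1} | X^n = x  ~  N(x + drift(x)*h, diffusion(x)**2 * h).
    mean = x + drift(x) * h
    std = abs(diffusion(x)) * math.sqrt(h)
    return rng.gauss(mean, std)

# Geometric Brownian motion dX = a*X dt + b*X dW, with exact mean E[X_T] = x0 * exp(a*T).
a, b, x0, h, n_steps = 0.5, 0.2, 1.0, 0.01, 100
rng = random.Random(0)

samples = []
for _ in range(5000):
    x = x0
    for _ in range(n_steps):
        x = gaussian_step(x, lambda s: a * s, lambda s: b * s, h, rng)
    samples.append(x)

mean_T = sum(samples) / len(samples)
print(mean_T)  # close to exp(0.5) ~ 1.6487: weak accuracy in the mean
```

Replacing the single Gaussian by a mixture, as the paper proposes, is what lifts the weak order from one to two while keeping the per-step sampling cost linear in the dimension.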
Yiqi Gu, Haizhao Yang, Chao Zhou (2020)
The least squares method with deep neural networks as function parametrization has been applied to solve certain high-dimensional partial differential equations (PDEs) successfully; however, its convergence is slow and might not be guaranteed even within a simple class of PDEs. To improve the convergence of the network-based least squares model, we introduce a novel self-paced learning framework, SelectNet, which quantifies the difficulty of training samples, treats samples equally in the early stage of training, and slowly explores more challenging samples, e.g., samples with larger residual errors, mimicking the human cognitive process for more efficient learning. In particular, a selection network and the PDE solution network are trained simultaneously; the selection network adaptively weights the training samples of the solution network, achieving the goal of self-paced learning. Numerical examples indicate that the proposed SelectNet model outperforms existing models in convergence speed and robustness, especially for low-regularity solutions.
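The self-paced weighting schedule can be caricatured without any networks: early in training all samples get near-uniform weights, and later the weights tilt toward samples with large residuals. The softmax-with-emphasis selector below is an illustrative stand-in for the trained selection network, not the SelectNet architecture itself:

```python
import math

def selection_weights(residuals, emphasis):
    # emphasis >= 0 controls the training stage: 0 gives uniform weights (early
    # training); larger values concentrate weight on large-residual samples (late
    # training). Weights are a softmax over emphasis-scaled absolute residuals.
    scores = [emphasis * abs(r) for r in residuals]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

residuals = [0.1, 0.5, 2.0, 0.05]

early = selection_weights(residuals, emphasis=0.0)  # uniform: each weight is 0.25
late = selection_weights(residuals, emphasis=5.0)   # dominated by the residual-2.0 sample

print(early)
print(late)
```

In SelectNet proper, this hand-tuned schedule is replaced by a second network trained jointly with the solution network, so the emphasis adapts to the data rather than following a fixed rule.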
