
On convergence of higher order schemes for the projective integration method for stiff ordinary differential equations

 Added by John Maclean
 Publication date 2015
Language: English





We present a convergence proof for higher order implementations of the projective integration method (PI) for a class of deterministic multi-scale systems in which fast variables quickly settle on a slow manifold. The error is shown to contain contributions associated with the length of the microsolver, the numerical accuracy of the macrosolver and the distance from the slow manifold caused by the combined effect of micro- and macrosolvers, respectively. We also provide stability conditions for the PI methods under which the fast variables will not diverge from the slow manifold. We corroborate our results by numerical simulations.
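The micro/macro structure analyzed above can be sketched for a toy fast-slow system: a burst of small inner Euler steps damps the fast transient toward the slow manifold, and the macrosolver then extrapolates along the slope of the last two micro iterates. The model, step sizes, and function names below are illustrative choices, not the paper's scheme.

```python
import numpy as np

def projective_euler(f, u0, k_micro, dt_micro, dt_macro, n_macro):
    """Projective forward Euler: damp fast transients with k_micro small
    inner Euler steps, then extrapolate over the remainder of the large
    outer step using the slope of the last two micro iterates."""
    u = np.asarray(u0, dtype=float)
    traj = [u.copy()]
    for _ in range(n_macro):
        # microsolver: small explicit steps relax u toward the slow manifold
        for _ in range(k_micro):
            u_prev = u.copy()
            u = u + dt_micro * f(u)
        # macrosolver: projective (extrapolation) step using the micro slope
        slope = (u - u_prev) / dt_micro
        u = u + (dt_macro - k_micro * dt_micro) * slope
        traj.append(u.copy())
    return np.array(traj)

# toy stiff system: fast variable x relaxes quickly to the slow variable y,
# whose exact dynamics are y' = -y (illustrative, not from the paper)
eps = 1e-3
f = lambda u: np.array([-(u[0] - u[1]) / eps, -u[1]])
traj = projective_euler(f, [2.0, 1.0], k_micro=5, dt_micro=eps,
                        dt_macro=0.05, n_macro=20)
```

The stability conditions discussed in the abstract show up directly here: the projective step amplifies any fast component left by the microsolver, so the inner steps must damp the transient sufficiently before the extrapolation is taken.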



Related Research

We present a convergence proof of the projective integration method for a class of deterministic multi-dimensional multi-scale systems which are amenable to centre manifold theory. The error is shown to contain contributions associated with the numerical accuracy of the microsolver, the numerical accuracy of the macrosolver and the distance from the centre manifold caused by the combined effect of micro- and macrosolvers, respectively. We corroborate our results by numerical simulations.
Suyong Kim, Weiqi Ji, Sili Deng (2021)
Neural Ordinary Differential Equations (ODEs) are a promising approach to learning dynamic models from time-series data in science and engineering applications. This work aims at learning neural ODEs for stiff systems, which commonly arise from chemical kinetic modeling in chemical and biological systems. We first show the challenges of learning neural ODEs on the classical stiff ODE system of Robertson's problem and propose techniques to mitigate the challenges associated with scale separation in stiff systems. We then present successful demonstrations on Robertson's problem and an air pollution problem. The demonstrations show that deep networks with rectified activations, proper scaling of the network outputs and loss functions, and stabilized gradient calculations are the key techniques enabling the learning of stiff neural ODEs. This success opens up possibilities for using neural ODEs in applications with widely varying time scales, such as chemical dynamics in energy conversion, environmental engineering, and the life sciences.
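The scale separation that makes Robertson's problem hard to learn can be seen directly in the spread of its Jacobian eigenvalues. A minimal numerical check follows; the evaluation state is an illustrative value shortly after the initial transient, not taken from the paper:

```python
import numpy as np

def robertson_jacobian(y):
    """Jacobian of the Robertson kinetics ODE:
       y1' = -0.04 y1 + 1e4 y2 y3
       y2' =  0.04 y1 - 1e4 y2 y3 - 3e7 y2^2
       y3' =  3e7 y2^2
    """
    y1, y2, y3 = y
    return np.array([
        [-0.04,  1e4 * y3,             1e4 * y2],
        [ 0.04, -1e4 * y3 - 6e7 * y2, -1e4 * y2],
        [ 0.0,   6e7 * y2,             0.0],
    ])

# illustrative state after the fast transient (assumed, not from the paper)
y = np.array([0.9, 3.65e-5, 0.1])
lams = np.linalg.eigvals(robertson_jacobian(y))

# one eigenvalue is ~0 (mass conservation); the ratio of the remaining
# magnitudes measures the stiffness of the system
nonzero = sorted(abs(l) for l in lams if abs(l) > 1e-6)
stiffness_ratio = nonzero[-1] / nonzero[0]
```

The eigenvalue magnitudes span several orders, which is precisely the scale separation that an explicit solver, or a naively trained neural ODE, struggles with.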
Zeyu Jin, Ruo Li (2021)
We propose a high order numerical homogenization method for dissipative ordinary differential equations (ODEs) containing two time scales. Essentially, only a first order homogenized model that is valid globally in time can be derived. To achieve higher order, we adopt a numerical approach in the framework of the heterogeneous multiscale method (HMM). With a successively refined microscopic solver, accuracy improvement up to arbitrary order is attained provided the input data are smooth enough. Based on our formulation of the high order microscopic solver, we then propose an iterative formula to compute it, and use this formula to develop an efficient implementation of the method for practical applications. Several numerical examples are presented to validate the new models and numerical methods.
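The basic HMM micro/macro pattern referred to above can be sketched as follows: the effective slow force is estimated by averaging the oscillatory right-hand side over a few fast periods with the slow state frozen, and the macrosolver then steps with that averaged force. The example system, window length, and parameters are illustrative assumptions, not the authors' high order formulation:

```python
import numpy as np

def hmm_step(x, t, f_fast, eps, H, n_micro):
    """One first order HMM macro step (forward Euler): average the
    oscillatory right-hand side over a whole number of fast periods,
    holding the slow state x frozen, then advance x with that force."""
    window = 2 * np.pi * eps * 5              # exactly 5 fast periods
    s = np.linspace(t, t + window, n_micro)   # micro sampling points
    F = np.mean([f_fast(x, si) for si in s])  # effective (averaged) force
    return x + H * F

# oscillatory stiff forcing; its average over fast periods is just -x,
# so the effective slow equation is x' = -x (illustrative model)
eps = 1e-4
f_fast = lambda x, t: -x * (1.0 + np.sin(t / eps))

x, t, H = 1.0, 0.0, 0.05
for _ in range(20):
    x = hmm_step(x, t, f_fast, eps, H, n_micro=200)
    t += H
```

Averaging over whole fast periods removes the oscillation from the force estimate; refining this microscopic averaging is, roughly, where the higher order accuracy in the abstract comes from.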
In this paper, we propose third-order semi-discretized schemes in space based on the tempered weighted and shifted Grünwald difference (tempered-WSGD) operators for the tempered fractional diffusion equation. We also provide stability and convergence analysis for the fully discrete scheme based on a Crank--Nicolson scheme in time. A third-order scheme for the tempered Black--Scholes equation is also proposed and tested numerically. Numerical experiments are carried out to confirm the accuracy and effectiveness of the proposed methods.
We propose a numerical integrator for determining low-rank approximations to solutions of large-scale matrix differential equations. The considered differential equations are semilinear and stiff. Our method consists of first splitting the differential equation into a stiff and a non-stiff part, and then following a dynamical low-rank approach. We conduct an error analysis of the proposed procedure, which is independent of the stiffness and robust with respect to possibly small singular values in the approximation matrix. Following the proposed method, we show how to obtain low-rank approximations for differential Lyapunov and for differential Riccati equations. Our theory is illustrated by numerical experiments.
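The stiff/non-stiff splitting combined with a low-rank retraction can be sketched for a Lyapunov-type equation with a diagonal stiff operator, where the stiff flow is available in closed form. This is a simplified illustration under stated assumptions (plain SVD truncation as the retraction), not the paper's integrator:

```python
import numpy as np

def truncate(A, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def split_step_low_rank(A, L, G, h, r):
    """One Lie-splitting step for A' = L A + A L + G(A) with diagonal L:
    (1) integrate the stiff linear part exactly (elementwise exponential),
    (2) take an explicit Euler step for the non-stiff part G,
    (3) retract back to the rank-r manifold by SVD truncation."""
    lam = np.diag(L)
    E = np.exp(h * (lam[:, None] + lam[None, :]))  # exact stiff flow
    A = E * A
    A = A + h * G(A)
    return truncate(A, r)

# demo: stiff differential Lyapunov-type equation with rank-1 forcing
# (sizes, decay rates, and forcing are illustrative choices)
rng = np.random.default_rng(0)
n, r, h = 20, 3, 0.01
L = np.diag(-np.linspace(1.0, 1e3, n))  # widely spread decay rates: stiff
b = rng.standard_normal(n)
G = lambda A: np.outer(b, b)            # constant low-rank source term

A = np.zeros((n, n))
for _ in range(200):
    A = split_step_low_rank(A, L, G, h, r)
```

Because the stiff part is integrated exactly rather than by an explicit step, the step size `h` is constrained only by the non-stiff part, which is the point of the splitting.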
