The automatic selection of an appropriate time step size has been considered extensively in the literature. However, most of the strategies developed operate under the assumption that the computational cost (per time step) is independent of the step size. This assumption is reasonable for non-stiff ordinary differential equations and for partial differential equations where the linear systems of equations resulting from an implicit integrator are solved by direct methods. It is, however, usually not satisfied if iterative (for example, Krylov) methods are used. In this paper, we propose a step size selection strategy that adaptively reduces the computational cost (per unit time) as the simulation progresses, constrained by the specified tolerance. We show that the proposed approach yields significant improvements in performance for a range of problems (diffusion-advection equation, Burgers equation with a reaction term, porous media equation, viscous Burgers equation, Allen--Cahn equation, and the two-dimensional Brusselator system). While traditional step size controllers have emphasized a smooth sequence of time step sizes, we emphasize the exploration of different step sizes, which necessitates relatively rapid changes in the step size.
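To illustrate the general idea of cost-aware, exploratory step size selection (a minimal sketch, not the controller proposed above), the following Python fragment combines the standard accuracy-limited step proposal with an occasional exploratory step and tracks the cost per unit simulated time. The function name `propose_step`, the exploration probability, and the error-model order are assumptions of this sketch.

```python
import numpy as np

def propose_step(h, err, iters, tol, order=2, rng=np.random.default_rng(0)):
    """Toy cost-aware step-size proposal (illustrative only).

    err   -- local error estimate of the last step
    iters -- iterations the iterative (e.g. Krylov) solver needed, so
             iters / h is the computational cost per unit simulated time
    """
    # Largest step permitted by the tolerance, via the usual asymptotic model.
    h_acc = 0.9 * h * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))

    # Occasionally probe a markedly different step size so that the cost per
    # unit time is sampled for several step sizes; the accuracy-limited step
    # h_acc always caps the proposal.
    h_try = h * rng.choice([0.5, 2.0]) if rng.random() < 0.2 else h_acc
    return min(h_try, h_acc), iters / h
```

A controller built on this idea would keep a record of the returned cost values and bias future proposals toward the cheaper step sizes, as long as the error estimate stays below the tolerance.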
We construct a family of embedded pairs for optimal strong stability preserving (SSP) explicit Runge-Kutta methods of order $2 \leq p \leq 4$, to be used to obtain numerical solutions of spatially discretized hyperbolic PDEs. In this construction, the goals include non-defective methods, a large region of absolute stability, and optimal error measurement as defined in [5,19]. The new family of embedded pairs allows SSP methods to adapt by varying the step size based on local error estimation while maintaining their inherent nonlinear stability properties. Through several numerical experiments, we assess the overall effectiveness in terms of work versus precision, while also taking accuracy and stability into consideration.
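For concreteness, here is a minimal Python sketch of how an embedded pair is used for error estimation with an SSP method. It takes one step of the classical Shu-Osher SSPRK(3,3) scheme and compares it against a generic second-order combination of the same stages; the embedded weights are an illustrative choice, not the optimized pairs constructed in the paper.

```python
import numpy as np

def ssprk33_embedded_step(f, t, u, h):
    """One step of Shu-Osher SSPRK(3,3) with a simple embedded
    second-order error estimate (generic weights, illustration only)."""
    k1 = f(t, u)
    k2 = f(t + h, u + h * k1)
    k3 = f(t + 0.5 * h, u + 0.25 * h * (k1 + k2))
    u3 = u + h * (k1 / 6 + k2 / 6 + 2 * k3 / 3)   # third-order SSP solution
    u2 = u + 0.5 * h * (k1 + k2)                   # second-order embedded solution
    return u3, np.linalg.norm(np.atleast_1d(u3 - u2))  # local error estimate
```

The returned estimate then feeds a standard step-size controller, e.g. accepting the step when the estimate is below the tolerance and rescaling the step size accordingly.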
We present a new class of iterative schemes for solving initial value problems (IVPs) based on discontinuous Galerkin (DG) methods. Starting from the weak DG formulation of an IVP, we derive a new iterative method based on a preconditioned Picard iteration. Using this approach, we can systematically construct explicit, implicit, and semi-implicit schemes with arbitrary order of accuracy. We also show that the same schemes can be constructed by solving a series of correction equations based on the DG weak formulation. The accuracy of the schemes is proven to be $\min\{2p+1, K+1\}$, with $p$ the degree of the DG polynomial basis and $K$ the number of iterations. The stability is explored numerically; we show that the implicit schemes are $A$-stable at least for $0 \leq p \leq 9$. Furthermore, we combine the methods with a multilevel strategy to accelerate their convergence. The new multilevel scheme is intended to provide a flexible framework for high-order space-time discretizations and to be coupled with space-time multigrid techniques for solving partial differential equations (PDEs). We present numerical examples for ODEs and PDEs to analyze the performance of the new methods. Moreover, due to its structure, the newly proposed class of methods is also a competitive and promising candidate for parallel-in-time algorithms such as Parareal, PFASST, and multigrid in time.
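The flavor of such an iteration can be illustrated on a single time slab by a plain (unpreconditioned) Picard sweep applied to a Gauss-collocation form of $u' = f(t, u)$, in which each sweep gains roughly one order of accuracy up to the quadrature order. The sketch below is an analogue of the DG-based construction, not the preconditioned scheme of the paper; the choice of three Gauss nodes, the number of sweeps K, and the scalar-state restriction are assumptions.

```python
import numpy as np

def _lagrange_integrals(nodes, upper):
    """Integrals of the Lagrange basis polynomials over [0, upper]."""
    m = len(nodes)
    out = np.zeros((np.size(upper), m))
    for j in range(m):
        lj = np.poly1d([1.0])
        for i in range(m):
            if i != j:
                lj *= np.poly1d([1.0, -nodes[i]]) / (nodes[j] - nodes[i])
        Lj = lj.integ()
        out[:, j] = Lj(np.atleast_1d(upper)) - Lj(0.0)
    return out

def picard_collocation_step(f, t0, u0, h, K=4):
    """Plain Picard iteration on a collocation form of u' = f(t, u)."""
    x, _ = np.polynomial.legendre.leggauss(3)   # 3 Gauss nodes
    c = 0.5 * (x + 1.0)                          # nodes mapped to [0, 1]
    S = _lagrange_integrals(c, c)                # node-to-node integration matrix
    w = _lagrange_integrals(c, 1.0)[0]           # full-step quadrature weights

    U = np.full(len(c), float(u0))               # initial guess: constant state
    for _ in range(K):                           # Picard sweeps
        F = np.array([f(t0 + h * ci, Ui) for ci, Ui in zip(c, U)])
        U = u0 + h * (S @ F)
    F = np.array([f(t0 + h * ci, Ui) for ci, Ui in zip(c, U)])
    return u0 + h * (w @ F)                      # end-of-slab update
```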
This work considers multirate generalized-structure additively partitioned Runge-Kutta (MrGARK) methods for solving stiff systems of ordinary differential equations (ODEs) with multiple time scales. These methods treat different partitions of the system with different timesteps for a more targeted and efficient solution compared to monolithic single-rate approaches. When implicit methods are used across all partitions, a method must balance stability against the cost of solving the nonlinear equations for the stages. In order to characterize this important trade-off, we explore multirate coupling strategies, problems for assessing linear stability, and techniques to efficiently implement Newton iterations for the stage equations. Unlike much of the existing multirate stability analysis, which is limited in scope to particular methods, we present general statements on stability and describe fundamental limitations for certain types of multirate schemes. New implicit multirate methods of up to fourth order are derived, and their accuracy and efficiency properties are verified with numerical tests.
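As a point of reference for the multirate idea itself (not one of the implicit MrGARK methods derived in the paper), the Python sketch below advances a slow/fast partitioned system with a first-order explicit coupling: the slow partition takes one macro step while the fast partition takes m micro steps with the slow state held frozen. The partition functions f_slow and f_fast and this particular coupling are assumptions of the sketch.

```python
import numpy as np

def multirate_euler_step(f_slow, f_fast, t, y_s, y_f, H, m):
    """First-order explicit multirate step (illustrative only).

    The slow partition y_s takes one forward-Euler macro step of size H,
    while the fast partition y_f takes m forward-Euler micro steps of size
    H/m, coupled through the frozen slow state.
    """
    h = H / m
    y_s_new = y_s + H * f_slow(t, y_s, y_f)       # one slow macro step
    z = np.array(y_f, dtype=float)
    for k in range(m):                             # m fast micro steps
        z = z + h * f_fast(t + k * h, y_s, z)      # slow state held frozen
    return y_s_new, z
```

The trade-offs studied in the paper arise precisely because, with implicit methods in each partition, every macro and micro stage above would require a nonlinear (Newton) solve.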
Time integration methods for solving initial value problems are an important component of many scientific and engineering simulations. Implicit time integrators are desirable for their stability properties, which significantly relax restrictions on the timestep size. However, implicit methods require the solution of one or more systems of nonlinear equations at each timestep, which for large simulations can be prohibitively expensive. This paper introduces a new family of linearly implicit multistep methods (LIMM), which requires only the solution of one linear system per timestep. Order conditions and stability theory for these methods are presented, as well as design and implementation considerations. Practical methods of order up to five are developed that have similar error coefficients, but improved stability regions, when compared to the widely used BDF methods. Numerical testing of a self-starting, variable-stepsize, variable-order implementation of the new LIMM methods shows measurable performance improvement over a similar BDF implementation.
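To make the "one linear solve per timestep" structure concrete, here is a minimal Python sketch of a linearly implicit (Rosenbrock-)Euler step, a first-order analogue of that idea; the LIMM methods themselves are multistep and reach order five, so this is only an illustration. A vector-valued state y and a user-supplied Jacobian function jac are assumed.

```python
import numpy as np

def linearly_implicit_euler_step(f, jac, t, y, h):
    """Linearly implicit Euler: solve (I - h*J) k = f(t, y) once, then
    update y <- y + h*k, so no Newton iteration is needed."""
    J = jac(t, y)                        # Jacobian of f at (t, y)
    A = np.eye(len(y)) - h * J
    k = np.linalg.solve(A, f(t, y))      # the single linear solve per step
    return y + h * k
```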
We examine several iterative numerical methods for computing the eigenvalues and eigenvectors of real matrices. The five methods examined here range from the simple power iteration method to the more complicated QR iteration method. The derivation, procedure, and advantages of each method are briefly discussed.
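As an example of the simplest of these methods, the following Python sketch implements the power iteration for the dominant eigenpair of a real matrix; the Rayleigh-quotient convergence test and the tolerance are illustrative choices.

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_iter=1000, rng=np.random.default_rng(0)):
    """Power iteration: repeatedly apply A and renormalize to approximate
    the dominant eigenvalue and its eigenvector."""
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(max_iter):
        w = A @ v                         # apply the matrix
        v_new = w / np.linalg.norm(w)     # renormalize the iterate
        lam_new = v_new @ A @ v_new       # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        v, lam = v_new, lam_new
    return lam, v
```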