Discrete variational methods have shown excellent performance in numerical simulations of different mechanical systems. In this paper, we introduce an iterative method for discrete variational methods that is appropriate for boundary value problems. More concretely, we explore a parallelization strategy that leverages the power of multicore CPUs and GPUs (graphics cards). We study this parallel method for first-order and second-order Lagrangians and illustrate its excellent behavior in some interesting applications, namely Zermelo's navigation problem, a fuel-optimal navigation problem, and an interpolation problem.
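As a rough illustration of the discrete variational setting described above (not of the paper's parallel iterative solver), the following Python sketch assembles the discrete Euler-Lagrange equations of a first-order discrete Lagrangian for a harmonic oscillator with fixed endpoints and solves the resulting boundary value problem; the Lagrangian, step size, and boundary values are illustrative assumptions.

import numpy as np
from scipy.optimize import root

# Discrete Lagrangian (midpoint rule) for a harmonic oscillator:
#   L_d(a, b) = h * ( 0.5*((b - a)/h)**2 - 0.5*((a + b)/2)**2 )
h, N = 0.01, 200                    # step size and number of intervals (illustrative)
T = N * h
q0, qN = 0.0, 1.0                   # prescribed boundary values

def D1L(a, b):                      # partial derivative of L_d w.r.t. its first slot
    return -(b - a) / h - h * (a + b) / 4

def D2L(a, b):                      # partial derivative of L_d w.r.t. its second slot
    return (b - a) / h - h * (a + b) / 4

def residual(q_int):
    q = np.concatenate(([q0], q_int, [qN]))
    # discrete Euler-Lagrange equations at the interior nodes k = 1, ..., N-1
    return D2L(q[:-2], q[1:-1]) + D1L(q[1:-1], q[2:])

q_int = root(residual, np.linspace(q0, qN, N + 1)[1:-1]).x
t = np.linspace(0.0, T, N + 1)
exact = np.sin(t) / np.sin(T)       # exact solution of q'' = -q, q(0)=0, q(T)=1
print(np.max(np.abs(np.concatenate(([q0], q_int, [qN])) - exact)))

Each discrete Euler-Lagrange equation couples a node only to its two neighbours; this locality is the kind of structure that node-parallel sweeps on multicore CPUs and GPUs can exploit.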
We consider the null controllability problem for the wave equation, and analyse a stabilized finite element method formulated on a global, unstructured spacetime mesh. We prove error estimates for the approximate control given by the computational method. The proofs are based on the regularity properties of the control given by the Hilbert Uniqueness Method, together with the stability properties of the numerical scheme. Numerical experiments illustrate the results.
Algebraic models for the reconstruction problem in X-ray computed tomography (CT) provide a flexible framework that applies to many measurement geometries. For large-scale problems we need to use iterative solvers, and we need stopping rules that terminate the iterations once we have computed a reconstruction that suitably balances the reconstruction error and the influence of noise from the measurements. Many such stopping rules have been developed in the inverse problems community, but they have not attracted much attention in the CT world. The goal of this paper is to describe and illustrate four stopping rules that are relevant for CT reconstructions.
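As a concrete example of one such rule, the sketch below applies Morozov's discrepancy principle to a Landweber iteration on a generic linear system A x = b with noisy data; the matrix, noise level, and safety factor tau are illustrative assumptions, and no specific CT geometry or stopping rule from the paper is implied.

import numpy as np

# Morozov's discrepancy principle as a stopping rule for Landweber iteration on
# A x = b with noisy data; A, the noise level and the safety factor tau are
# illustrative and not tied to any particular CT geometry.
rng = np.random.default_rng(0)
m, n = 300, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
noise = 0.01 * rng.standard_normal(m)
b = A @ x_true + noise
delta = np.linalg.norm(noise)              # noise level (assumed known here)
tau = 1.02                                 # safety factor > 1

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # Landweber relaxation parameter
for k in range(10_000):
    r = b - A @ x
    if np.linalg.norm(r) <= tau * delta:   # stop: residual has reached the noise level
        break
    x = x + step * (A.T @ r)
print(k, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))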
This paper is concerned with a novel deep learning method for variational problems with essential boundary conditions. To this end, we first reformulate the original problem as a minimax problem corresponding to a feasible augmented Lagrangian, which can be solved by the augmented Lagrangian method in an infinite-dimensional setting. Based on this, by expressing the primal and dual variables with two individual deep neural network functions, we present an augmented Lagrangian deep learning method whose parameters are trained by stochastic optimization together with a projection technique. Compared to the traditional penalty method, the new method has two main advantages: i) the choice of the penalty parameter is flexible and robust, and ii) the numerical solution is more accurate at the same order of magnitude of computational cost. As typical applications, we apply the new approach to elliptic problems and (nonlinear) eigenvalue problems with essential boundary conditions, and numerical experiments are presented to show the effectiveness of the new method.
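The following PyTorch sketch illustrates the augmented Lagrangian idea on a one-dimensional model problem, minimizing the Ritz energy of -u'' = f on (0,1) with the essential condition u(0) = u(1) = 0. In the method of the paper both the primal and dual variables are neural networks and the multiplier update involves a projection; here the boundary consists of two points, so the multiplier is simply a 2-vector and the projection is omitted. The network architecture, penalty parameter, and optimizer settings are assumptions.

import math
import torch

# Augmented Lagrangian treatment of an essential boundary condition in a
# Ritz-type deep learning setting: minimize the energy of -u'' = f on (0, 1)
# subject to u(0) = u(1) = 0 (exact solution u = sin(pi*x)).
torch.manual_seed(0)
f = lambda x: (math.pi ** 2) * torch.sin(math.pi * x)

u_net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
lam = torch.zeros(2)                         # multipliers for u(0) and u(1)
rho = 10.0                                   # penalty parameter
opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
xb = torch.tensor([[0.0], [1.0]])            # boundary points

for it in range(3000):
    x = torch.rand(256, 1, requires_grad=True)          # Monte Carlo quadrature points
    u = u_net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * du ** 2 - f(x) * u).mean()           # Ritz energy functional
    g = u_net(xb).squeeze()                              # boundary constraint residuals
    loss = energy + (lam * g).sum() + 0.5 * rho * (g ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if (it + 1) % 500 == 0:                              # dual ascent on the multiplier
        lam = (lam + rho * g).detach()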
In this paper we propose two new quasi-boundary value methods for regularizing ill-posed backward heat conduction problems. With a standard finite difference discretization in space and time, the resulting all-at-once nonsymmetric sparse linear systems have the desired block $\omega$-circulant structure, which can be exploited to design an efficient parallel-in-time (PinT) direct solver built upon an explicit FFT-based diagonalization of the time discretization matrix. A convergence analysis is presented to justify the optimal choice of the regularization parameter. Numerical examples are reported to validate our analysis and illustrate the superior computational efficiency of the proposed PinT methods.
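A scalar toy version of the diagonalization step can be sketched in a few lines: an $\omega$-circulant matrix $C$ with first column $c$ satisfies $D C D^{-1} = \mathrm{circulant}(Dc)$ with $D = \mathrm{diag}(\omega^{0}, \omega^{1/n}, \dots, \omega^{(n-1)/n})$, so $Cx = b$ can be solved with two FFTs and an elementwise division. In the block (PinT) case each entry of $c$ is a spatial matrix and the division becomes a set of decoupled spatial solves, one per frequency, which is what makes the solver parallel in time. The size, $\omega$, and data below are illustrative.

import numpy as np

# Solve C x = b for an omega-circulant C with first column c, using the
# FFT-based diagonalization D C D^{-1} = circulant(D c).
def solve_omega_circulant(c, b, omega):
    n = len(c)
    d = omega ** (np.arange(n) / n)               # diagonal scaling D
    y = np.fft.ifft(np.fft.fft(d * b) / np.fft.fft(d * c))
    return y / d

rng = np.random.default_rng(0)
n, omega = 8, np.exp(0.7j)
c, b = rng.random(n), rng.random(n)
C = np.array([[c[j - k] if j >= k else omega * c[n + j - k]
               for k in range(n)] for j in range(n)])
x = solve_omega_circulant(c, b, omega)
print(np.allclose(C @ x, b))                      # check against the dense matrix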
Time integration methods for solving initial value problems are an important component of many scientific and engineering simulations. Implicit time integrators are desirable for their stability properties, significantly relaxing restrictions on timestep size. However, implicit methods require solutions to one or more systems of nonlinear equations at each timestep, which for large simulations can be prohibitively expensive. This paper introduces a new family of linearly implicit multistep methods (LIMM), which only requires the solution of one linear system per timestep. Order conditions and stability theory for these methods are presented, as well as design and implementation considerations. Practical methods of order up to five are developed that have similar error coefficients, but improved stability regions, when compared to the widely used BDF methods. Numerical testing of a self-starting, variable-stepsize, variable-order implementation of the new LIMM methods shows measurable performance improvement over a similar BDF implementation.
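As a minimal illustration of the "one linear solve per timestep" idea (a first-order linearly implicit Euler step, not the authors' LIMM family), the sketch below integrates a stiff linear test problem with a step size for which explicit Euler would be unstable; the problem, Jacobian, and step size are illustrative assumptions.

import numpy as np

# A first-order linearly implicit (Rosenbrock-type) Euler step: one linear
# solve per timestep and no nonlinear iteration.
A = np.array([[-1000.0, 0.0],
              [1.0, -1.0]])
f = lambda y: A @ y                 # right-hand side of y' = A y
J = A                               # exact Jacobian of f
h = 0.01                            # explicit Euler would be unstable here (|1 - 1000 h| > 1)
y = np.array([1.0, 0.0])
I = np.eye(2)
for step in range(int(round(1.0 / h))):
    # one linear solve per step:  (I - h J) k = h f(y),  then  y <- y + k
    k = np.linalg.solve(I - h * J, h * f(y))
    y = y + k
exact = np.array([np.exp(-1000.0),
                  (np.exp(-1.0) - np.exp(-1000.0)) / 999.0])
print(y, exact)                     # numerical solution at t = 1 vs exact solution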