In this article, a new unified duality theory is developed for Petrov-Galerkin finite element methods. This theory is then used to motivate goal-oriented adaptive mesh refinement strategies for use with discontinuous Petrov-Galerkin (DPG) methods. The focus of this article is mainly on broken ultraweak variational formulations of stationary boundary value problems; however, many of the ideas presented within are general enough that they can be extended to any such well-posed variational formulation. The proposed goal-oriented adaptive mesh refinement procedures require the construction of refinement indicators for both a primal problem and a dual problem. In the DPG context, the primal problem is simply the system of linear equations coming from a standard DPG method, and the dual problem is a similar system of equations coming from a new method which is dual to DPG. This new method has the same coefficient matrix as the associated DPG method but a different load. We refer to this new finite element method as a DPG* method. A thorough analysis of DPG* methods, as stand-alone finite element methods, is not given here but will be provided in subsequent articles. For DPG methods, the current theory of a posteriori error estimation is reviewed and the reliability estimate in [13, Theorem 2.1] is improved upon. For DPG* methods, three different classes of refinement indicators are derived and several contributions are made towards rigorous a posteriori error estimation. At the close of the article, results of numerical experiments with Poisson's boundary value problem in a three-dimensional domain are provided. These results clearly demonstrate the utility of the goal-oriented adaptive mesh refinement strategies for quantities of interest with either interior or boundary terms.
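The primal/dual pairing described above can be sketched in generic discrete notation (a hedged illustration using standard DPG symbols, not necessarily the article's own): with $B$ the rectangular stiffness matrix of the broken variational form and $G$ the Gram matrix of the discontinuous test space, the DPG method solves normal equations whose coefficient matrix is symmetric positive definite, and the dual (DPG*) problem reuses that same matrix with a goal-functional load.

```latex
% Hedged sketch: generic discrete DPG normal equations (assumed notation)
\underbrace{B^{\mathsf{T}} G^{-1} B}_{=:\,A}\, u
  = B^{\mathsf{T}} G^{-1} f
  \qquad \text{(primal, DPG)}
\]
\[
A\, z = j
  \qquad \text{(dual, DPG*: same matrix } A \text{, different load } j\text{)}
```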
Block-structured adaptive mesh refinement (AMR) provides the basis for the temporal and spatial discretization strategy for a number of ECP applications in the areas of accelerator design, additive manufacturing, astrophysics, combustion, cosmology, multiphase flow, and wind plant modelling. AMReX is a software framework that provides a unified infrastructure with the functionality needed for these and other AMR applications to effectively and efficiently utilize machines ranging from laptops to exascale architectures. AMR reduces the computational cost and memory footprint compared to a uniform mesh while preserving accurate descriptions of different physical processes in complex multi-physics algorithms. AMReX supports algorithms that solve systems of partial differential equations (PDEs) in simple or complex geometries, as well as those that use particles and/or particle-mesh operations to represent component physical processes. In this paper, we discuss the core elements of the AMReX framework, such as data containers and iterators, as well as several specialized operations that meet the needs of the application projects. In addition, we highlight the strategy that the AMReX team is pursuing to achieve highly performant code across a range of accelerator-based architectures for a variety of different applications.
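The cost argument for block-structured AMR can be made concrete with a minimal, framework-agnostic sketch (this is illustrative Python, not AMReX's C++ API): cells where an error indicator exceeds a tolerance are tagged, the tagged region is covered by a refined patch, and the resulting composite mesh is far cheaper than refining everywhere. The indicator, tolerance, and single-box covering below are all assumptions for illustration.

```python
# Framework-agnostic AMR sketch (not AMReX's API): tag cells where an
# indicator is large, then cover the tagged region with a finer patch.
import math

def indicator(x, y):
    # stand-in feature: a sharp Gaussian "kernel" centered at (0.5, 0.5)
    return math.exp(-200.0 * ((x - 0.5) ** 2 + (y - 0.5) ** 2))

n = 32                       # coarse level: n x n cells on the unit square
tol = 0.1
tagged = [(i, j) for i in range(n) for j in range(n)
          if indicator((i + 0.5) / n, (j + 0.5) / n) > tol]

# cover the tagged cells with one bounding box, refined by a factor of 2
i_lo = min(i for i, _ in tagged); i_hi = max(i for i, _ in tagged)
j_lo = min(j for _, j in tagged); j_hi = max(j for _, j in tagged)
fine_cells = 4 * (i_hi - i_lo + 1) * (j_hi - j_lo + 1)

total_cells = n * n + fine_cells   # composite coarse + fine mesh
uniform_fine = (2 * n) ** 2        # cost of refining the whole domain
```

Here the composite mesh resolves the localized feature at the fine resolution while remaining much smaller than a uniformly refined grid; production frameworks cover the tagged region with many efficiently chosen boxes rather than a single bounding box.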
Large-scale finite element simulations of complex physical systems governed by partial differential equations crucially depend on adaptive mesh refinement (AMR) to allocate computational budget to regions where higher resolution is required. Existing scalable AMR methods make heuristic refinement decisions based on instantaneous error estimation and thus do not aim for long-term optimality over an entire simulation. We propose a novel formulation of AMR as a Markov decision process and apply deep reinforcement learning (RL) to train refinement policies directly from simulation. AMR poses a new problem for RL in that both the state dimension and the available action set change at every step, which we address by proposing new policy architectures with differing generality and inductive bias. The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations. We demonstrate in comprehensive experiments on static function estimation and the advection of different fields that RL policies can be competitive with a widely used error estimator and generalize to larger, more complex, and unseen test problems.
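The decision-process view of AMR can be illustrated with a toy, self-contained sketch (an assumption for exposition, not the paper's environment or policy): the state is the vector of per-element error indicators, an action selects one element to refine, and the reward is the resulting error reduction. A greedy indicator-based policy stands in for a learned one; note how the action set grows with the mesh, which is exactly the varying-dimension difficulty the abstract describes.

```python
# Toy AMR-as-MDP sketch: 1D piecewise-constant approximation of a peaked
# function.  State: per-element errors; action: element to refine;
# reward: error decrease.  Names (step, greedy_policy) are illustrative.
import math

def f(x):
    return math.exp(-50.0 * x * x)

def element_error(a, b, n=8):
    # crude L1 error of the midpoint-constant approximation on [a, b]
    mid_val = f(0.5 * (a + b))
    h = (b - a) / n
    return sum(abs(f(a + (i + 0.5) * h) - mid_val) * h for i in range(n))

def step(elements, action):
    # refine element `action` by bisection; return (next state, reward)
    a, b = elements[action]
    old = element_error(a, b)
    m = 0.5 * (a + b)
    new_elems = elements[:action] + [(a, m), (m, b)] + elements[action + 1:]
    reward = old - element_error(a, m) - element_error(m, b)
    return new_elems, reward

def greedy_policy(elements):
    # stand-in for a learned policy: refine the worst element
    errs = [element_error(a, b) for a, b in elements]
    return errs.index(max(errs))

elements = [(-1.0, 1.0)]     # one-element initial mesh; action set grows
total_reward = 0.0
for _ in range(20):
    act = greedy_policy(elements)
    elements, r = step(elements, act)
    total_reward += r
```

An RL approach would replace `greedy_policy` with a trained network whose parameter count does not depend on `len(elements)`, so the same policy applies as the mesh (and hence the action set) grows.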
Spatial symmetries and invariances play an important role in the description of materials. When modelling material properties, it is important to be able to respect such invariances. Here we discuss how to model and generate random ensembles of tensors where one wants to be able to prescribe certain classes of spatial symmetries and invariances for the whole ensemble, while at the same time demanding that the mean or expected value of the ensemble be subject to a possibly higher spatial invariance class. Our special interest is in the class of physically symmetric and positive definite tensors, as they appear often in the description of materials. As the set of positive definite tensors is not a linear space, but rather an open convex cone in the linear vector space of physically symmetric tensors, it may be advantageous to widen the notion of mean to the so-called Fréchet mean, which is based on distance measures between positive definite tensors other than the usual Euclidean one. For the sake of simplicity, as well as to expose the main idea as clearly as possible, we limit ourselves here to second order tensors. It is shown how the random ensemble can be modelled and generated, with fine control of the spatial symmetry or invariance of the whole ensemble, as well as its Fréchet mean, independently in its scaling and directional aspects. As an example, 2D and 3D models of steady-state heat conduction in a human proximal femur, a bone with high material anisotropy, are explored. The bone is modelled with a random thermal conductivity tensor, and the numerical results show the distinct impact of incorporating different material uncertainties (scaling, orientation, and prescribed material symmetry) into the constitutive model on the desired quantities of interest, such as the temperature distribution and heat flux.
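To make the Fréchet-mean idea concrete, here is a minimal sketch for one common choice of non-Euclidean distance on symmetric positive definite (SPD) matrices, the log-Euclidean metric (an illustrative assumption; the article may use a different metric). Under $d(A,B) = \lVert \mathrm{Log}\,A - \mathrm{Log}\,B \rVert_F$, the Fréchet mean is the matrix exponential of the arithmetic mean of the matrix logarithms, which is automatically SPD.

```python
# Log-Euclidean Fréchet mean of SPD matrices (illustrative choice of
# metric, not necessarily the article's): exp of the mean of the logs.
import numpy as np

def spd_log(A):
    # matrix logarithm of an SPD matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(w)) @ V.T

def spd_exp(S):
    # matrix exponential of a symmetric matrix
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_mean(mats):
    return spd_exp(sum(spd_log(A) for A in mats) / len(mats))

def random_spd(rng, n=2):
    # random SPD sample: random orthogonal frame, log-normal eigenvalues
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return Q @ np.diag(np.exp(rng.normal(size=n))) @ Q.T

rng = np.random.default_rng(0)
samples = [random_spd(rng) for _ in range(100)]   # e.g. conductivity tensors
mean = log_euclidean_mean(samples)
```

Separating the log-eigenvalues (scaling) from the orthogonal frames (orientation) in `random_spd` mirrors the abstract's independent control of scaling and directional aspects, albeit in a much simplified form.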
We describe an adaptive version of a method for generating valid, naturally curved quadrilateral meshes. The method uses a guiding field, derived from the concept of a cross field, to create block decompositions of multiply connected two-dimensional domains. The a priori curved quadrilateral blocks can be further split into a finer high-order mesh as needed. The guiding field is computed by a Laplace equation solver using a continuous Galerkin or discontinuous Galerkin spectral element formulation. This operation is aided by $p$-adaptation to achieve faster convergence of the solution with respect to the computational cost. From the guiding field, irregular nodes and separatrices can be accurately located. A first version of the code is implemented in the open-source spectral element framework Nektar++ and its dedicated high-order mesh generation platform NekMesh.
A high-order quasi-conservative discontinuous Galerkin (DG) method is proposed for the numerical simulation of compressible multi-component flows. A distinct feature of the method is a predictor-corrector strategy for defining the grid velocity. A Lagrangian mesh is first computed based on the flow velocity and then used as an initial mesh in a moving mesh method (the moving mesh partial differential equation, or MMPDE, method) to improve its quality. The fluid dynamic equations are discretized in the direct arbitrary Lagrangian-Eulerian framework using DG elements and the non-oscillatory kinetic flux, while the species equation is discretized using a quasi-conservative DG scheme to avoid numerical oscillations near material interfaces. A selection of one- and two-dimensional examples is presented to verify the convergence order and the constant-pressure-velocity preservation property of the method. These examples also demonstrate that incorporating Lagrangian meshing with the MMPDE moving mesh method works well to concentrate mesh points in regions of shocks and material interfaces.
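The point-concentration behaviour of moving mesh methods can be illustrated with a much simpler 1D stand-in (an assumption for exposition, not the MMPDE scheme itself): equidistributing a monitor function $M(x)=\sqrt{1+u'(x)^2}$ places mesh nodes so that each cell carries equal monitor mass, clustering them where the solution has a steep front, analogous to a material interface.

```python
# 1D equidistribution sketch (illustrative stand-in for MMPDE-style
# mesh movement): nodes cluster where the monitor function is large.
import math

def monitor(x):
    # steep tanh front at x = 0.5 plays the role of a material interface
    du = 50.0 / math.cosh(50.0 * (x - 0.5)) ** 2   # u'(x) for u = tanh(50(x - 0.5))
    return math.sqrt(1.0 + du * du)

def equidistribute(n_cells, n_quad=2000):
    # cumulative integral of the monitor on a fine background grid
    xs = [i / n_quad for i in range(n_quad + 1)]
    cum = [0.0]
    for i in range(n_quad):
        cum.append(cum[-1] + 0.5 * (monitor(xs[i]) + monitor(xs[i + 1])) / n_quad)
    # invert: node k sits where the cumulative integral equals (k/n) * total
    total, nodes, j = cum[-1], [], 0
    for k in range(n_cells + 1):
        target = total * k / n_cells
        while j < n_quad and cum[j + 1] < target:
            j += 1
        frac = (target - cum[j]) / max(cum[j + 1] - cum[j], 1e-30)
        nodes.append(xs[j] + frac / n_quad)
    return nodes

nodes = equidistribute(40)
widths = [b - a for a, b in zip(nodes, nodes[1:])]
# cells near the front at x = 0.5 are far smaller than those elsewhere
```

In the method summarized above, an analogous concentration happens in multiple dimensions, with the Lagrangian predictor supplying the initial mesh that the MMPDE correction then redistributes.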