This work presents the windowed space-time least-squares Petrov-Galerkin method (WST-LSPG) for model reduction of nonlinear parameterized dynamical systems. WST-LSPG is a generalization of the space-time least-squares Petrov-Galerkin method (ST-LSPG). The main drawback of ST-LSPG is that it requires solving a dense space-time system with a space-time basis computed over the entire global time domain, which can be infeasible for large-scale applications. Instead of using a temporally global space-time trial subspace and minimizing the discrete-in-time full-order model (FOM) residual over the entire time domain, the proposed WST-LSPG approach addresses this weakness by (1) dividing the time simulation into time windows, (2) devising a unique low-dimensional space-time trial subspace for each window, and (3) minimizing the discrete-in-time space-time residual of the dynamical system over each window. This formulation yields a problem whose coupling is confined within each window but sequential across windows. To enable high-fidelity trial subspaces with a relatively small number of basis vectors, this work proposes constructing the space-time basis of each window using tensor decompositions. WST-LSPG is equipped with hyper-reduction techniques to further reduce the computational cost. Numerical experiments for the one-dimensional Burgers equation and the two-dimensional compressible Navier-Stokes equations for flow over a NACA 0012 airfoil demonstrate that WST-LSPG is superior to ST-LSPG in terms of both accuracy and computational efficiency.
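As a purely illustrative companion to the windowed formulation above, the following minimal Python sketch applies the idea to a toy linear full-order model integrated with backward Euler: the time steps are split into windows, each window gets its own low-dimensional space-time trial subspace (here a Kronecker product of spatial and temporal SVD modes, a simple stand-in for the tensor decompositions proposed in the paper), and the stacked discrete-in-time residual of the window is minimized by least squares. All names, sizes, and the omission of hyper-reduction are our own assumptions, not the authors' implementation.

import numpy as np
from scipy.optimize import least_squares

# Toy linear FOM du/dt = A u, discretized with backward Euler (an assumption for
# illustration; the paper targets nonlinear parameterized systems).
n, m, n_windows, dt = 40, 8, 3, 0.01        # spatial dofs, steps per window, windows, step size
rng = np.random.default_rng(0)
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))

def window_residual(U_flat, u_start):
    """Stacked backward-Euler residuals over one window; U_flat = vec of an (n, m) block."""
    U = U_flat.reshape(n, m, order="F")
    prev, res = u_start, []
    for k in range(m):
        res.append(U[:, k] - prev - dt * A @ U[:, k])
        prev = U[:, k]
    return np.concatenate(res)

# Offline stage: one full-order trajectory whose windows supply the trial bases.
u0 = rng.standard_normal(n)
u, snaps = u0.copy(), []
for _ in range(n_windows * m):
    u = np.linalg.solve(np.eye(n) - dt * A, u)
    snaps.append(u.copy())
snaps = np.array(snaps).T                   # shape (n, n_windows*m)

# Online stage: per-window space-time basis Phi_w = Phi_t kron Phi_s, then
# least-squares minimization of the windowed space-time residual; windows are
# coupled only sequentially through u_prev.
u_prev = u0
for w in range(n_windows):
    W = snaps[:, w*m:(w+1)*m]
    Phi_s = np.linalg.svd(W, full_matrices=False)[0][:, :4]     # spatial modes
    Phi_t = np.linalg.svd(W.T, full_matrices=False)[0][:, :3]   # temporal modes
    Phi_w = np.kron(Phi_t, Phi_s)                               # space-time basis, shape (n*m, 12)
    fun = lambda q, u0_w=u_prev: window_residual(Phi_w @ q, u0_w)
    q = least_squares(fun, np.zeros(Phi_w.shape[1])).x
    U_w = (Phi_w @ q).reshape(n, m, order="F")
    print(f"window {w}: space-time residual norm {np.linalg.norm(fun(q)):.2e}")
    u_prev = U_w[:, -1]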
We present a Petrov-Galerkin (PG) method for a class of nonlocal convection-dominated diffusion problems. There are two main ingredients in our approach. First, we define the norm on the test space as the norm induced by the trial space norm, i.e., the optimal test norm, so that the inf-sup condition is satisfied uniformly, independently of the problem. We show the well-posedness of a class of nonlocal convection-dominated diffusion problems under the optimal test norm with general assumptions on the nonlocal diffusion and convection kernels. Second, following the framework of Cohen et al.~(2012), we embed the original nonlocal convection-dominated diffusion problem into a larger mixed problem so that an enriched test space can be chosen to stabilize the numerical algorithm. In the numerical experiments, we use an approximate optimal test norm that can be implemented efficiently in one dimension, and study its performance against the energy norm on the test space. We conduct convergence studies for the nonlocal problem using uniform $h$- and $p$-refinements, and adaptive $h$-refinements, on both smooth manufactured solutions and solutions with a sharp gradient in a transition layer. In addition, we confirm that the PG method is asymptotically compatible.
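For reference, the standard construction of the optimal test norm (as used, e.g., in the discontinuous Petrov-Galerkin literature) can be sketched in the generic template below; the precise trial norm and bilinear form for the nonlocal problem are those defined in the paper, so this display is only illustrative. With $b(\cdot,\cdot)$ the bilinear form of a well-posed variational problem, $U$ the trial space with norm $\|\cdot\|_U$, and $V$ the test space, the optimal test norm is
\[
  \|v\|_{V,\mathrm{opt}} := \sup_{0 \neq u \in U} \frac{b(u,v)}{\|u\|_{U}},
\]
under which the continuous inf-sup constant is exactly one,
\[
  \inf_{0 \neq u \in U} \ \sup_{0 \neq v \in V} \frac{b(u,v)}{\|u\|_{U}\,\|v\|_{V,\mathrm{opt}}} = 1,
\]
so the inf-sup condition holds uniformly, independently of the problem data.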
We consider best approximation problems in a nonlinear subset $\mathcal{M}$ of a Banach space of functions $(\mathcal{V},\|\bullet\|)$. The norm is assumed to be a generalization of the $L^2$-norm for which only a weighted Monte Carlo estimate $\|\bullet\|_n$ can be computed. The objective is to obtain an approximation $v\in\mathcal{M}$ of an unknown function $u \in \mathcal{V}$ by minimizing the empirical norm $\|u-v\|_n$. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the nonlinear least squares setting. Several model classes are examined where analytical statements can be made about the RIP and the results are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
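A minimal Python sketch of the empirical setting, under our own simplifying assumptions (a one-dimensional domain, a two-parameter nonlinear model class $\mathcal{M} = \{\, c\sin(ax) \,\}$, and an arbitrary sampling density): the $L^2$-norm is replaced by its weighted Monte Carlo estimate $\|\bullet\|_n$ and the empirical best approximation is computed by nonlinear least squares. None of the specific choices below come from the paper.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
u = lambda x: np.exp(-x) * np.sin(3 * x)        # "unknown" target function on [0, 1]

# Sampling density rho on [0, 1] (here biased toward the left endpoint); an optimal
# density would be adapted to the model class, as discussed in the paper.
alpha = 1.5
rho = lambda x: alpha * (1 - x) ** (alpha - 1)
x = 1 - rng.random(200) ** (1 / alpha)          # inverse-CDF samples from rho
w = 1.0 / (len(x) * rho(x))                     # Monte Carlo weights

def weighted_residual(theta):
    """Residual whose squared 2-norm is the empirical norm ||u - v||_n^2."""
    a, c = theta
    v = c * np.sin(a * x)
    return np.sqrt(w) * (u(x) - v)

fit = least_squares(weighted_residual, x0=[1.0, 1.0])
print("empirical best-approximation error:", np.linalg.norm(fit.fun))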
There are plenty of applications and analyses in the literature for time-independent elliptic partial differential equations hinting at the benefits of overtesting, i.e., using more collocation conditions than basis functions. Overtesting not only reduces the problem size, but is also known to be necessary for the stability and convergence of widely used unsymmetric Kansa-type strong-form collocation methods. We consider kernel-based meshfree methods, namely methods of lines with spatial collocation and overtesting, for solving parabolic partial differential equations on surfaces without parametrization. In this paper, we extend the time-independent convergence theories for overtesting techniques to parabolic equations on smooth, closed surfaces.
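The following Python sketch illustrates the overtesting idea in a deliberately simple setting: a kernel-based method of lines for the 1D heat equation on an interval rather than on a surface, with more collocation (test) points than kernel centers, so every implicit time step is an overdetermined least-squares solve. The kernel, point counts, and boundary weighting are illustrative assumptions, not the paper's setup.

import numpy as np

eps = 8.0                                        # Gaussian shape parameter (assumed)
phi    = lambda x, z: np.exp(-eps**2 * (x - z)**2)
phi_xx = lambda x, z: (4*eps**4*(x - z)**2 - 2*eps**2) * phi(x, z)

N, M = 20, 60                                    # trial centers vs. test points (M > N: overtesting)
z = np.linspace(0, 1, N)                         # kernel centers
y = np.linspace(0, 1, M)                         # collocation (test) points
Phi    = phi(y[:, None], z[None, :])             # (M, N) evaluation matrix
Phi_xx = phi_xx(y[:, None], z[None, :])          # (M, N) second-derivative matrix

# Ansatz u(y, t) ~ Phi @ c(t); backward Euler in time with overtesting:
# (Phi - dt * Phi_xx) c_{k+1} = Phi c_k in the least-squares sense, plus strongly
# weighted rows enforcing homogeneous Dirichlet boundary conditions.
dt, n_steps = 1e-3, 100
B = phi(np.array([0.0, 1.0])[:, None], z[None, :])           # boundary evaluation rows
c = np.linalg.lstsq(Phi, np.sin(np.pi * y), rcond=None)[0]   # fit initial data u0 = sin(pi x)

A_ls = np.vstack([Phi - dt * Phi_xx, 1e3 * B])
for _ in range(n_steps):
    rhs = np.concatenate([Phi @ c, np.zeros(2)])
    c = np.linalg.lstsq(A_ls, rhs, rcond=None)[0]

u_end = Phi @ c
print("max |u(T)|:", np.abs(u_end).max())        # the exact solution decays like exp(-pi^2 T)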
We consider the right-preconditioned generalized minimal residual (AB-GMRES) method, an efficient method for solving underdetermined least squares problems. Morikuni (Ph.D. thesis, 2013) showed that for some inconsistent and ill-conditioned problems, the iterates of the AB-GMRES method may diverge. This is mainly because the Hessenberg matrix in the GMRES method becomes very ill-conditioned, so that the backward substitution of the resulting triangular system is numerically unstable. We propose a stabilized GMRES based on solving the normal equations corresponding to this triangular system using the standard Cholesky decomposition. This has the effect of shifting upwards the tiny singular values of the Hessenberg matrix that lead to an inaccurate solution. Thus, the process becomes numerically stable and the system becomes consistent, yielding better convergence and a more accurate solution. Numerical experiments show that the proposed method is robust and efficient for solving inconsistent and ill-conditioned underdetermined least squares problems. The method can be regarded as a way of making GMRES stable for highly ill-conditioned inconsistent problems.
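To make the stabilization step concrete, here is a hedged Python sketch of that single ingredient (not a full AB-GMRES implementation): within GMRES, the Hessenberg least-squares problem is reduced by Givens rotations to an upper-triangular system $R y = g$; instead of back-substituting the nearly singular $R$, one solves the normal equations $R^T R\, y = R^T g$ with a Cholesky factorization. The diagonal-shift fallback below is our own guess at handling a Cholesky breakdown and is not taken from the paper.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def stabilized_triangular_solve(R, g):
    """Solve R y = g via the normal equations R^T R y = R^T g using Cholesky."""
    C = R.T @ R
    try:
        cf = cho_factor(C)
    except np.linalg.LinAlgError:
        # assumed heuristic: tiny relative diagonal shift if Cholesky breaks down
        cf = cho_factor(C + 1e-15 * np.linalg.norm(C) * np.eye(len(g)))
    return cho_solve(cf, R.T @ g)

# Toy usage on a deliberately ill-conditioned triangular matrix mimicking the
# R factor that arises inside GMRES for an ill-conditioned problem.
rng = np.random.default_rng(2)
m = 20
R = np.triu(rng.standard_normal((m, m))) + np.eye(m)
R[-1, -1] = 1e-14                                # near-singular trailing entry
g = rng.standard_normal(m)
y = stabilized_triangular_solve(R, g)
print("residual norm ||R y - g||:", np.linalg.norm(R @ y - g))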
We propose a general registration method, i.e., one independent of the underlying equation, for parameterized Model Order Reduction. Given the spatial domain $\Omega \subset \mathbb{R}^d$ and a set of snapshots $\{ u^k \}_{k=1}^{n_{\rm train}}$ over $\Omega$ associated with $n_{\rm train}$ values of the model parameters $\mu^1,\ldots, \mu^{n_{\rm train}} \in \mathcal{P}$, the algorithm returns a parameter-dependent bijective mapping $\boldsymbol{\Phi}: \Omega \times \mathcal{P} \to \mathbb{R}^d$: the mapping is designed to make the mapped manifold $\{ u_{\mu} \circ \boldsymbol{\Phi}_{\mu} : \, \mu \in \mathcal{P} \}$ better suited for linear compression methods. We apply the registration procedure, in combination with a linear compression method, to devise low-dimensional representations of solution manifolds with slowly decaying Kolmogorov $N$-widths; we also consider the application to problems in parameterized geometries. We present a theoretical result that establishes the mathematical rigor of the registration procedure. We further present numerical results for several two-dimensional problems to empirically demonstrate the effectiveness of our proposal.
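As a hedged illustration of why registration helps linear compression (a deliberately simple 1D example of our own, not the paper's algorithm): for a family of moving fronts, composing each snapshot with a parameter-dependent shift $\boldsymbol{\Phi}_{\mu}$ that re-centers the front collapses the mapped manifold to, numerically, a single mode, whereas the unregistered snapshots require many POD modes.

import numpy as np

# Solution manifold u_mu(x) = tanh((x - mu)/delta): a sharp front whose position
# depends on the parameter mu, so its Kolmogorov N-width decays slowly.
x = np.linspace(0, 1, 400)
mus = np.linspace(0.3, 0.7, 50)
delta, mu_ref = 0.01, 0.5

U_plain = np.array([np.tanh((x - mu) / delta) for mu in mus]).T                    # (n_x, n_mu)
# Registered snapshots u_mu(Phi_mu(x)) with the shift Phi_mu(x) = x + (mu - mu_ref).
U_reg   = np.array([np.tanh(((x + (mu - mu_ref)) - mu) / delta) for mu in mus]).T

def modes_for_energy(U, tol=1e-4):
    """Number of POD modes capturing a (1 - tol) fraction of the snapshot energy."""
    s = np.linalg.svd(U, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(energy, 1 - tol) + 1)

print("modes needed without registration:", modes_for_energy(U_plain))
print("modes needed with registration   :", modes_for_energy(U_reg))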