Fourier extension is an approximation method that alleviates the periodicity requirements of Fourier series and avoids the Gibbs phenomenon when approximating functions. We describe a similar extension approach using regular wavelet bases on a hypercube to approximate functions on subsets of that cube. These subsets may have a general shape. The construction is inherently redundant, which leads to severe ill-conditioning, but recent theory shows that high accuracy and numerical stability can nevertheless be achieved using regularization and oversampling. Regularized least-squares solvers, such as the truncated singular value decomposition, that are suited to the resulting ill-conditioned and tall linear systems generally have cubic computational cost. We compare several algorithms that improve on this complexity. The improvements exploit the sparsity and the structure of the discrete wavelet transform. We present a method that requires $\mathcal{O}(N)$ operations in 1-D and $\mathcal{O}(N^{3(d-1)/d})$ operations in $d$-D, $d>1$. We show experimentally that direct sparse QR solvers are more time-efficient, but yield larger expansion coefficients.
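The truncated singular value decomposition mentioned above can be sketched in a few lines. This is a generic illustration of the regularized solver, not the paper's wavelet-specific algorithm; the threshold `eps` and the random test system are illustrative assumptions.

```python
import numpy as np

def truncated_svd_solve(A, b, eps=1e-12):
    # Regularized least squares for an ill-conditioned system A x ~ b:
    # discard singular values below eps times the largest one.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > eps * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Tiny demo on a tall, nearly rank-deficient system.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
A[:, 4] = A[:, 3] + 1e-15 * rng.standard_normal(20)  # near-dependent column
b = rng.standard_normal(20)
x = truncated_svd_solve(A, b)
```

Forming and truncating the full SVD is what gives the cubic cost cited above; the algorithms compared in the paper avoid it by exploiting the structure of the wavelet transform.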
We consider the problem of reconstructing an unknown function $u\in L^2(D,\mu)$ from its evaluations at given sampling points $x^1,\dots,x^m\in D$, where $D\subset \mathbb{R}^d$ is a general domain and $\mu$ a probability measure. The approximation is picked from a linear space $V_n$ of interest, where $n=\dim(V_n)$. Recent results have revealed that certain weighted least-squares methods achieve near-best approximation with a sampling budget $m$ that is proportional to $n$, up to a logarithmic factor $\ln(2n/\varepsilon)$, where $\varepsilon>0$ is a probability of failure. The sampling points should be picked at random according to a well-chosen probability measure $\sigma$ whose density is given by the inverse Christoffel function, which depends both on $V_n$ and $\mu$. While this approach is greatly facilitated when $D$ and $\mu$ have tensor product structure, it becomes problematic for domains $D$ with arbitrary geometry, since the optimal measure depends on an orthonormal basis of $V_n$ in $L^2(D,\mu)$ which is not explicitly given, even for simple polynomial spaces. Sampling according to this measure is therefore not practically feasible. In this paper, we discuss practical sampling strategies, which amount to using a perturbed measure $\widetilde{\sigma}$ that can be computed in an offline stage not involving the measurement of $u$. We show that near-best approximation is attained by the resulting weighted least-squares method at near-optimal sampling budget, and we discuss multilevel approaches that preserve optimality of the cumulated sampling budget when the spaces $V_n$ are iteratively enriched. These strategies rely on the knowledge of a priori upper bounds on the inverse Christoffel function. We establish such bounds for spaces $V_n$ of multivariate algebraic polynomials, and for general domains $D$.
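As a toy illustration of this sampling strategy (not the paper's method for general domains), take $D=[-1,1]$ with the uniform measure and $V_n$ spanned by Legendre polynomials, where the inverse Christoffel function $k_n(x)=\sum_j \phi_j(x)^2$ and the optimal density $k_n/n$ are explicit. The rejection sampler and the test function below are illustrative assumptions.

```python
import numpy as np

def basis_matrix(x, n):
    # Orthonormal Legendre basis on [-1, 1] w.r.t. the uniform probability
    # measure dmu = dx/2: phi_j(x) = sqrt(2j+1) * P_j(x).
    V = np.polynomial.legendre.legvander(np.asarray(x, dtype=float), n - 1)
    return V * np.sqrt(2.0 * np.arange(n) + 1.0)

def sample_optimal(n, m, rng):
    # Draw m points from sigma, whose density w.r.t. mu is k_n(x)/n,
    # by rejection from mu; on [-1, 1], k_n peaks at n^2 at the endpoints.
    pts = []
    while len(pts) < m:
        x = rng.uniform(-1.0, 1.0)
        kn = float(np.sum(basis_matrix([x], n) ** 2))
        if rng.uniform(0.0, n * n) < kn:
            pts.append(x)
    return np.array(pts)

def weighted_lsq_fit(u, n, m, rng):
    # Weighted least squares with weights w(x) = n / k_n(x).
    x = sample_optimal(n, m, rng)
    Phi = basis_matrix(x, n)
    sw = np.sqrt(n / np.sum(Phi ** 2, axis=1))
    coef, *_ = np.linalg.lstsq(sw[:, None] * Phi, sw * u(x), rcond=None)
    return coef

rng = np.random.default_rng(1)
coef = weighted_lsq_fit(lambda t: t ** 3, n=4, m=40, rng=rng)
# u(x) = x^3 lies in V_4, so it is recovered up to round-off.
xt = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(basis_matrix(xt, 4) @ coef - xt ** 3))
```

The difficulty addressed in the paper is precisely that on a general domain $D$ no explicit orthonormal basis is available, so `basis_matrix` and hence the acceptance test above cannot be evaluated directly.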
This article is an account of the NABUCO project carried out during the CEMRACS 2019 summer camp devoted to geophysical fluids and gravity flows. The goal is to construct finite difference approximations of the transport equation with nonzero incoming boundary data that achieve the best possible convergence rate in the maximum norm. We construct, implement and analyze the so-called inverse Lax-Wendroff procedure at the incoming boundary. Optimal convergence rates are obtained by combining sharp stability estimates for extrapolation boundary conditions with numerical boundary layer expansions. We illustrate the results with the Lax-Wendroff and O3 schemes.
We propose and analyze a robust BPX preconditioner for the integral fractional Laplacian on bounded Lipschitz domains. For either quasi-uniform grids or graded bisection grids, we show that the condition numbers of the resulting systems remain uniformly bounded with respect to both the number of levels and the fractional power. The results apply also to the spectral and censored fractional Laplacians.
In this paper, we develop efficient and accurate algorithms for evaluating $\varphi(A)$ and $\varphi(A)b$, where $A$ is an $N\times N$ matrix, $b$ is an $N$-dimensional vector and $\varphi$ is the function defined by $\varphi(z)\equiv\sum_{k=0}^{\infty}\frac{z^k}{(k+1)!}$. This matrix function (the so-called $\varphi$-function) plays a key role in the class of numerical methods known as exponential integrators. The algorithms use the scaling and modified squaring procedure combined with truncated Taylor series. A backward error analysis is presented to find the optimal scaling factor and degree of the Taylor approximation. Some useful techniques are employed to reduce the computational cost. Numerical comparisons with state-of-the-art algorithms show that the new algorithms perform well in both accuracy and efficiency.
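A bare-bones sketch of scaling and modified squaring for the $\varphi$-function, with a fixed Taylor degree rather than the backward-error-based parameter selection developed in the paper:

```python
import numpy as np

def phi(A, taylor_degree=16):
    # phi(z) = (e^z - 1)/z = sum_{k>=0} z^k / (k+1)!.
    # Scale A so that ||A / 2^s||_1 <= 1, approximate phi of the scaled
    # matrix by a truncated Taylor series, then undo the scaling via the
    # "modified squaring" identity phi(2Z) = phi(Z) (Z phi(Z) + 2 I) / 2.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 1.0 else 0
    Z = A / 2 ** s
    term = np.eye(n)
    Phi = np.eye(n)            # k = 0 term: coefficient 1/1! = 1
    fact = 1.0
    for k in range(1, taylor_degree + 1):
        term = term @ Z
        fact *= k + 1          # fact = (k+1)!
        Phi = Phi + term / fact
    for _ in range(s):         # undo the scaling
        Phi = Phi @ (Z @ Phi + 2.0 * np.eye(n)) / 2.0
        Z = 2.0 * Z
    return Phi
```

Evaluating $\varphi(A)b$ for a single vector would instead use only matrix-vector products, avoiding the explicit formation of $\varphi(A)$.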
We deal with the virtual element method (VEM) for solving the Poisson equation on a domain $\Omega$ with curved boundary. Given a polygonal approximation $\Omega_h$ of the domain $\Omega$, the standard order-$m$ VEM [6] leads to a suboptimal convergence rate as $m$ increases. We adapt the approach of [16] to the VEM and prove that an optimal convergence rate can be achieved by using a suitable correction that depends on high-order normal derivatives of the discrete solution at the boundary edges of $\Omega_h$; to retain computability, these derivatives are evaluated after applying the projector $\Pi^\nabla$ onto the space of polynomials. Numerical experiments confirm the theory.