In recent years, contour-based eigensolvers have emerged as a standard approach for the solution of large, sparse eigenvalue problems. Building upon recent performance improvements obtained through nonlinear least-squares optimization of so-called rational filters, we introduce a systematic method to design these filters by minimizing the worst-case convergence ratio, eliminating the parametric dependence on weight functions. Further, we provide an efficient way to deal with the box constraints that play a central role in the use of iterative linear solvers within contour-based eigensolvers. Indeed, these parameter-free filters consistently minimize the number of iterations and the number of FLOPs needed to reach convergence in the eigensolver. As a byproduct, our rational filters allow for a simple solution to load balancing when an interior eigenproblem is approached by slicing the sought-after spectral interval.
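For context, the standard quadrature-based rational filter that contour eigensolvers (e.g. FEAST-type methods) build on can be sketched as follows: discretizing the contour integral $\frac{1}{2\pi i}\oint \frac{dz}{z-\lambda}$ over a circle yields a rational function that maps eigenvalues inside the contour near 1 and those outside near 0. The sketch below is a minimal illustration of this baseline filter, not the optimized filters of the abstract; the circle contour, pole count `N`, and test values are illustrative assumptions.

```python
import math
import cmath

def rational_filter(lam, center=0.0, radius=1.0, N=16):
    """Trapezoidal-rule approximation of (1/2πi) ∮ dz/(z - lam)
    over a circle: ≈ 1 for lam inside the contour, ≈ 0 outside."""
    acc = 0.0 + 0.0j
    for j in range(N):
        # Midpoint angles keep the quadrature nodes off the real axis,
        # so the filter has no poles at real eigenvalues.
        theta = 2.0 * math.pi * (j + 0.5) / N
        z = center + radius * cmath.exp(1j * theta)
        acc += (z - center) / (z - lam)
    # Nodes come in conjugate pairs, so the sum is real for real lam.
    return (acc / N).real

# Eigenvalues inside the interval are mapped near 1, outside near 0:
inside = rational_filter(0.3)   # well inside the unit circle
outside = rational_filter(2.0)  # outside the contour
```

Each term of the sum corresponds to one linear system solve $(z_j I - A)^{-1}$ in an actual eigensolver, which is why the placement of the poles $z_j$ drives the iteration counts of the inner iterative solvers.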
This paper presents an enhanced version of our previous work, hybrid non-uniform subdivision surfaces [19], to achieve optimal convergence rates in isogeometric analysis. We introduce a parameter $\lambda$ ($\frac{1}{4}<\lambda<1$) to control the rate of shrinkage of irregular regions, so the method is called tuned hybrid non-uniform subdivision (tHNUS). Our previous work corresponds to the case $\lambda=\frac{1}{2}$. While introducing $\lambda$ into hybrid subdivision significantly complicates the theoretical proof of $G^1$ continuity around extraordinary vertices, reducing $\lambda$ can recover the optimal convergence rates when tuned hybrid subdivision functions are used as a basis in isogeometric analysis. From the geometric point of view, tHNUS retains shape quality comparable to that of [19] under non-uniform parameterization. Its basis functions are refinable, and the geometric mapping stays invariant during refinement. Moreover, we prove that a tuned hybrid subdivision surface is globally $G^1$-continuous. From the analysis point of view, tHNUS basis functions form a non-negative partition of unity, are globally linearly independent, and their spline spaces are nested. We numerically demonstrate that tHNUS basis functions achieve optimal convergence rates for Poisson's problem with non-uniform parameterization around extraordinary vertices.
Using deep neural networks to solve PDEs has attracted much attention recently. However, understanding of why deep learning methods work falls far behind their empirical success. In this paper, we provide a rigorous numerical analysis of the deep Ritz method (DRM) \cite{wan11} for second-order elliptic equations with Neumann boundary conditions. We establish the first nonasymptotic convergence rate in the $H^1$ norm for DRM using deep networks with $\mathrm{ReLU}^2$ activation functions. In addition to providing a theoretical justification of DRM, our study also sheds light on how to set the hyper-parameters of depth and width to achieve the desired convergence rate in terms of the number of training samples. Technically, we derive bounds on the approximation error of deep $\mathrm{ReLU}^2$ networks in the $H^1$ norm and on the Rademacher complexity of the non-Lipschitz composition of the gradient norm and a $\mathrm{ReLU}^2$ network, both of which are of independent interest.
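To make the setting concrete, the sketch below evaluates the Ritz energy $J(u)=\int_0^1 \tfrac{1}{2}|u'|^2 - fu\,dx$ for a one-dimensional one-hidden-layer $\mathrm{ReLU}^2$ network by quadrature. The network shape, the interval $[0,1]$, and the quadrature rule are illustrative assumptions; an actual DRM implementation minimizes this energy over the network parameters with stochastic gradient descent and Monte Carlo sampling rather than a fixed quadrature grid.

```python
def relu2(x):
    # ReLU^2 activation: C^1, so the network lies in H^1
    return max(x, 0.0) ** 2

def relu2_prime(x):
    return 2.0 * max(x, 0.0)

def network(x, params):
    # One-hidden-layer ReLU^2 network: u(x) = sum_i a_i * relu(w_i x + b_i)^2
    return sum(a * relu2(w * x + b) for (a, w, b) in params)

def network_prime(x, params):
    # Exact derivative of the network, used for the |u'|^2 term
    return sum(a * w * relu2_prime(w * x + b) for (a, w, b) in params)

def ritz_energy(params, f, n=1000):
    # Midpoint-rule approximation of J(u) = ∫_0^1 (1/2)|u'|^2 - f u dx
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += 0.5 * network_prime(x, params) ** 2 - f(x) * network(x, params)
    return total * h

# With a single neuron (a, w, b) = (1, 1, 0), u(x) = x^2 on [0, 1];
# for f = 0 the energy is ∫_0^1 (1/2)(2x)^2 dx = 2/3.
J = ritz_energy([(1.0, 1.0, 0.0)], f=lambda x: 0.0)
```

The composition of the gradient norm with the $\mathrm{ReLU}^2$ network, visible in `network_prime`, is exactly the non-Lipschitz map whose Rademacher complexity the abstract bounds.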
The Gaver-Stehfest algorithm is widely used for the numerical inversion of the Laplace transform. In this paper we provide the first rigorous study of the rate of convergence of the Gaver-Stehfest algorithm. We prove that Gaver-Stehfest approximations converge exponentially fast if the target function is analytic in a neighbourhood of a point, and that they converge at a rate $o(n^{-k})$ if the target function is $(2k+3)$-times differentiable at a point.
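The classical algorithm under study evaluates $f(t) \approx \frac{\ln 2}{t}\sum_{i=1}^{N} V_i\, F\!\left(\frac{i \ln 2}{t}\right)$ with the Stehfest coefficients $V_i$. A minimal sketch, with an illustrative choice of $N$ and a textbook test transform:

```python
import math

def stehfest_coeffs(N):
    # Stehfest coefficients V_1..V_N (N must be even); they alternate in
    # sign and grow rapidly, so double precision limits practical N to ~14-18.
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        V.append((-1) ** (N // 2 + i) * s)
    return V

def gaver_stehfest(F, t, N=14):
    # f(t) ≈ (ln 2 / t) * sum_i V_i * F(i ln 2 / t)
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[i] * F((i + 1) * ln2 / t) for i in range(N))

# Test transform: F(s) = 1/(s+1) is the Laplace transform of f(t) = exp(-t)
approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Note that the algorithm only samples $F$ on the positive real axis, which is what makes the convergence analysis delicate compared with contour-based inversion methods.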
In this paper, we examine the effectiveness of the classic multiscale finite element method (MsFEM) (Hou and Wu, 1997; Hou et al., 1999) for mixed Dirichlet-Neumann, Robin, and hemivariational inequality boundary problems. Constructing so-called boundary correctors is a common technique in existing methods to prove the convergence rate of MsFEM, but we argue that it does not reflect the essence of those problems. Instead, we focus on the first-order expansion structure. Through recently developed estimates in homogenization theory, we obtain convergence rates under milder assumptions and in neater form.
We consider the convergence of adaptive BEM for weakly-singular and hypersingular integral equations associated with the Laplacian and the Helmholtz operator in 2D and 3D. The local mesh refinement is driven by a two-level error estimator. We show that the adaptive algorithm drives the underlying error estimator to zero. Moreover, we prove that the saturation assumption already implies linear convergence of the error with optimal algebraic rates.