Regula Falsi, or the method of false position, is a numerical method for finding an approximate solution of f(x) = 0 on a finite interval [a, b], where f is a real-valued continuous function on [a, b] satisfying f(a)f(b) < 0. Previous studies proved the convergence of this method under certain assumptions on f, such as requiring that neither the first nor the second derivative of f changes sign on [a, b]. In this paper, we remove those assumptions and prove the convergence of the method for all continuous functions.
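For reference, the iteration itself is short. The following is a minimal sketch (the stopping rule on |f(c)| and the tolerance are illustrative choices, not taken from the paper):

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=200):
    """Method of false position on [a, b]; requires f(a) * f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        # Intersect the secant through (a, fa) and (b, fb) with the x-axis.
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            break
        # Keep the subinterval on which f still changes sign.
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

root = regula_falsi(lambda x: x ** 3 - 2, 1.0, 2.0)  # approximates the cube root of 2
```

Unlike bisection, the bracket endpoint on one side may never move (here b = 2 stays fixed for the convex example above), which is exactly why classical convergence proofs lean on sign conditions for f' and f''.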
Boundary integral numerical methods are among the most accurate methods for interfacial Stokes flow, and are widely applied. They have the advantage that only the boundary of the domain must be discretized, which reduces the number of discretization points and allows the treatment of complicated interfaces. Despite their popularity, there is no analysis of the convergence of these methods for interfacial Stokes flow. In practice, the stability of discretizations of the boundary integral formulation can depend sensitively on details of the discretization and on the application of numerical filters. We present a convergence analysis of the boundary integral method for Stokes flow, focusing on a rather general method for computing the evolution of an elastic capsule, viscous drop, or inviscid bubble in 2D strain and shear flows. The analysis clarifies the role of numerical filters in practical computations.
The randomized Gauss--Seidel method and its extension have attracted much attention recently, and their convergence rates have been studied extensively. However, these rates are usually stated as upper bounds, which cannot fully reflect the actual convergence. In this paper, we present a detailed analysis of their convergence behavior. The analysis shows that the larger a singular value of $A$ is, the faster the error decays in the corresponding singular vector space; the convergence is driven mainly by the large singular values at the beginning, then gradually by the small singular values, and finally by the smallest nonzero singular value. These results explain the phenomenon, observed in extensive numerical experiments in the literature, that these two methods seem to converge faster at the beginning. Numerical examples are provided to confirm these findings.
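The iteration being analyzed can be sketched as randomized coordinate descent on the least-squares functional. The variant below, with columns sampled proportionally to their squared norms, is one standard form; the problem sizes and iteration count are illustrative:

```python
import numpy as np

def randomized_gauss_seidel(A, b, iters=5000, seed=0):
    """Randomized Gauss-Seidel (coordinate descent) for min ||Ax - b||_2.

    Each step samples a column j with probability ||A_j||^2 / ||A||_F^2 and
    updates x_j to zero the j-th component of the normal-equation residual.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    col_norms = np.sum(A ** 2, axis=0)
    probs = col_norms / col_norms.sum()
    x = np.zeros(n)
    r = b.copy()                       # running residual b - A x
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        delta = A[:, j] @ r / col_norms[j]
        x[j] += delta
        r -= delta * A[:, j]           # keep the residual consistent
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                         # consistent overdetermined system
x = randomized_gauss_seidel(A, b)
```

Projecting the error onto the right singular vectors of $A$ at each iterate is one way to observe the staged decay described above: components along large singular values vanish first.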
The discretization of surface intrinsic PDEs poses challenges that do not arise in flat space. The closest point method (CPM) is an embedding method that represents a surface using a function mapping points of the embedding space to their closest points on the surface. This mapping extends intrinsic data into the embedding space, allowing us to approximate PDEs by standard numerical methods in a tubular neighborhood of the surface. Here, we solve the surface intrinsic positive Helmholtz equation by the CPM paired with finite differences, which usually yields a large, sparse, non-symmetric linear system. Domain decomposition methods, especially Schwarz methods, are robust algorithms for solving such linear systems. While there has been substantial work on Schwarz methods, their application to surface differential equations has not been widely analyzed. In this work, we investigate the convergence of the CPM coupled with Schwarz methods on 1-manifolds embedded in $\mathbb{R}^d$.
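The closest-point principle underlying the method can be illustrated on the unit circle, the simplest 1-manifold in $\mathbb{R}^2$: at points on the surface, the ordinary Laplacian of the closest-point extension agrees with the surface (Laplace--Beltrami) operator. A minimal sketch, in which the surface function $\cos\theta$, the test point, and the step size are illustrative:

```python
import numpy as np

def cp_circle(x, y):
    """Closest-point map onto the unit circle."""
    r = np.hypot(x, y)
    return x / r, y / r

def v(x, y):
    """Closest-point extension of u(theta) = cos(theta): constant along normals.

    On the unit circle cos(theta) is simply the x-coordinate.
    """
    cx, cy = cp_circle(x, y)
    return cx

# Key CPM identity: at a surface point, the 2D Laplacian of the extension
# equals the surface Laplacian, here (cos theta)'' = -cos(theta).
h = 1e-3
theta = 0.7
x0, y0 = np.cos(theta), np.sin(theta)
lap = (v(x0 + h, y0) + v(x0 - h, y0)
       + v(x0, y0 + h) + v(x0, y0 - h) - 4 * v(x0, y0)) / h ** 2
surface_lap = -np.cos(theta)
```

In the full method, the same idea is applied on a grid in a tubular neighborhood, with interpolation onto the closest points replacing the exact map used here.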
In this paper, we study an adaptive planewave method for multiple eigenvalues of second-order elliptic partial differential equations. Inspired by techniques from adaptive finite element analysis, we prove that the adaptive planewave method has a linear convergence rate and optimal complexity.
Using deep neural networks to solve PDEs has attracted much attention recently. However, the theoretical understanding of why deep learning works lags far behind its empirical success. In this paper, we provide a rigorous numerical analysis of the deep Ritz method (DRM) \cite{wan11} for second-order elliptic equations with Neumann boundary conditions. We establish the first nonasymptotic convergence rate in the $H^1$ norm for the DRM using deep networks with $\mathrm{ReLU}^2$ activation functions. In addition to providing a theoretical justification of the DRM, our study also sheds light on how to set the depth and width hyper-parameters to achieve the desired convergence rate in terms of the number of training samples. Technically, we derive bounds on the approximation error of deep $\mathrm{ReLU}^2$ networks in the $H^1$ norm and on the Rademacher complexity of the non-Lipschitz composition of the gradient norm with a $\mathrm{ReLU}^2$ network, both of which are of independent interest.
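To fix ideas, for a model Neumann problem $-\Delta u + w u = f$ in $\Omega$ with $\partial_n u = 0$ on $\partial\Omega$ (the coefficient $w > 0$ and the sample count $N$ here are illustrative, not the paper's notation), the DRM minimizes the Ritz energy over the network class, with the integral replaced by a Monte Carlo average over the training samples:
\[
\mathcal{E}(u) \;=\; \int_\Omega \Bigl( \tfrac12 |\nabla u|^2 + \tfrac12 w\, u^2 - f u \Bigr)\, dx
\;\approx\; \frac{|\Omega|}{N} \sum_{k=1}^{N} \Bigl( \tfrac12 |\nabla u(X_k)|^2 + \tfrac12 w\, u(X_k)^2 - f(X_k)\, u(X_k) \Bigr),
\qquad X_k \sim \mathrm{Unif}(\Omega).
\]
The homogeneous Neumann condition is natural for this energy, so no boundary penalty term is needed; the $|\nabla u|^2$ term is what brings the gradient-norm composition into the Rademacher complexity analysis.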