
On the rate of convergence of the Gaver-Stehfest algorithm

Posted by Alexey Kuznetsov
Publication date: 2020
Research field: Informatics Engineering
Paper language: English

The Gaver-Stehfest algorithm is widely used for numerical inversion of the Laplace transform. In this paper we provide the first rigorous study of the rate of convergence of the Gaver-Stehfest algorithm. We prove that Gaver-Stehfest approximations converge exponentially fast if the target function is analytic in a neighbourhood of a point, and that they converge at a rate $o(n^{-k})$ if the target function is $(2k+3)$-times differentiable at a point.
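The algorithm itself is short. Below is a minimal sketch of the standard Stehfest form (not code from the paper): f(t) is approximated by a weighted sum of samples of the transform F at the points k·ln2/t. Note that the weights grow rapidly with N, so double precision limits how large N can usefully be; high-precision arithmetic is needed beyond roughly N = 16-18.

```python
from math import factorial, log, exp

def stehfest_coefficients(N):
    """Stehfest weights V_1..V_N (N must be even)."""
    M = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, M) + 1):
            s += (j ** M * factorial(2 * j) /
                  (factorial(M - j) * factorial(j) * factorial(j - 1) *
                   factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (M + k) * s)
    return V

def gaver_stehfest(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s)."""
    ln2 = log(2.0)
    V = stehfest_coefficients(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# Sanity check: F(s) = 1/(s+1) is the transform of f(t) = exp(-t)
approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), 1.0)
```

Since exp(-t) is entire, the theorem above predicts exponential convergence in N for this example, which is what one observes in practice until floating-point cancellation takes over.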




Read also

In recent years, contour-based eigensolvers have emerged as a standard approach for the solution of large and sparse eigenvalue problems. Building upon recent performance improvements through non-linear least-squares optimization of so-called rational filters, we introduce a systematic method to design these filters by minimizing the worst-case convergence ratio, eliminating the parametric dependence on weight functions. Further, we provide an efficient way to deal with the box constraints, which play a central role in the use of iterative linear solvers in contour-based eigensolvers. Indeed, these parameter-free filters consistently minimize both the number of iterations and the number of FLOPs needed to reach convergence in the eigensolver. As a byproduct, our rational filters allow for a simple solution to load balancing when the solution of an interior eigenproblem is approached by slicing the sought-after spectral interval.
Using deep neural networks to solve PDEs has attracted a lot of attention recently. However, understanding of why the deep learning method works falls far behind its empirical success. In this paper, we provide a rigorous numerical analysis of the deep Ritz method (DRM) [wan11] for second-order elliptic equations with Neumann boundary conditions. We establish the first nonasymptotic convergence rate in the $H^1$ norm for DRM using deep networks with $\mathrm{ReLU}^2$ activation functions. In addition to providing a theoretical justification of DRM, our study also sheds light on how to set the hyper-parameters of depth and width to achieve the desired convergence rate in terms of the number of training samples. Technically, we derive bounds on the approximation error of deep $\mathrm{ReLU}^2$ networks in the $H^1$ norm and on the Rademacher complexity of the non-Lipschitz composition of the gradient norm and a $\mathrm{ReLU}^2$ network, both of which are of independent interest.
In this paper, we examine the effectiveness of the classic multiscale finite element method (MsFEM) (Hou and Wu, 1997; Hou et al., 1999) for mixed Dirichlet-Neumann, Robin, and hemivariational inequality boundary problems. Constructing so-called boundary correctors is a common technique in existing methods to prove the convergence rate of MsFEM, though we believe it does not reflect the essence of those problems. Instead, we focus on the first-order expansion structure. Through recently developed estimates in homogenization theory, we obtain our convergence rate under milder assumptions and in neater form.
In this work, we determine the full expression of the local truncation error of hyperbolic partial differential equations (PDEs) on a uniform mesh. If we are employing a stable numerical scheme and the global solution error is of the same order of accuracy as the global truncation error, we make the following observations in the asymptotic regime, where the truncation error is dominated by the powers of $\Delta x$ and $\Delta t$ rather than their coefficients. Assuming that we reach the asymptotic regime before the machine precision error takes over, (a) the order of convergence of stable numerical solutions of hyperbolic PDEs at constant ratio of $\Delta t$ to $\Delta x$ is governed by the minimum of the orders of the spatial and temporal discretizations, and (b) convergence cannot even be guaranteed under only spatial or temporal refinement. We have tested our theory against numerical methods employing the Method of Lines, not against ones that treat space and time together, and we have not taken into consideration the reduction in the spatial and temporal orders of accuracy resulting from slope-limiting monotonicity-preserving strategies commonly applied to finite volume methods. Otherwise, our theory applies to any hyperbolic PDE, be it linear or non-linear, employing finite difference, finite volume, or finite element discretization in space, and advanced in time with a predictor-corrector, multistep, or deferred correction method. If the PDE is reduced to an ordinary differential equation (ODE) by specifying the spatial gradients of the dependent variable, the coefficients, and the source terms to be zero, then the standard local truncation error of the ODE is recovered. We perform the analysis with generic and specific hyperbolic PDEs using the symbolic algebra package SymPy, and conduct a number of numerical experiments to demonstrate our theoretical findings.
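Observation (a) can be checked on a concrete example. The sketch below (an illustration, not the paper's SymPy code) Taylor-expands the first-order upwind scheme for the advection equation $u_t + a u_x = 0$ applied to an exact travelling-wave solution; the leading truncation error carries one power each of $\Delta t$ and $\Delta x$, so at fixed $\Delta t/\Delta x$ the scheme is first-order, the minimum of its spatial and temporal orders.

```python
import sympy as sp

x, t, dx, dt, a = sp.symbols('x t dx dt a', positive=True)

# Exact travelling-wave solution of u_t + a*u_x = 0
def u(xx, tt):
    return sp.sin(xx - a * tt)

# First-order upwind scheme applied to the exact solution;
# whatever does not cancel is the local truncation error
scheme = (u(x, t + dt) - u(x, t)) / dt + a * (u(x, t) - u(x - dx, t)) / dx

# Taylor-expand in dt and dx, keeping only the leading terms
lte = scheme.series(dt, 0, 2).removeO().series(dx, 0, 2).removeO()
lte = sp.simplify(sp.expand(lte))
# Leading term is a*(dx - a*dt)*sin(x - a*t)/2: first order in both dx and dt
```

Refining only $\Delta x$ (or only $\Delta t$) leaves the other first-order term untouched, illustrating observation (b) that one-sided refinement cannot guarantee convergence.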
Trung Nguyen (2021)
Regula Falsi, or the method of false position, is a numerical method for finding an approximate solution to f(x) = 0 on a finite interval [a, b], where f is a real-valued continuous function on [a, b] that satisfies f(a)f(b) < 0. Previous studies proved the convergence of this method under certain assumptions about the function f, such as requiring that neither the first nor the second derivative of f changes sign on the interval [a, b]. In this paper, we remove those assumptions and prove the convergence of the method for all continuous functions.
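The method replaces the midpoint of bisection with the x-intercept of the secant through the two bracketing points. A minimal sketch, assuming only continuity and a sign change as in the abstract:

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=500):
    """Find a root of continuous f in [a, b], given f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        # Secant through (a, fa) and (b, fb) crosses the x-axis at c
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) <= tol:
            break
        if fa * fc < 0:        # root lies in [a, c]
            b, fb = c, fc
        else:                  # root lies in [c, b]
            a, fa = c, fc
    return c

# Example: the classical cubic x^3 - 2x - 5 has a root near 2.0945515
root = regula_falsi(lambda x: x**3 - 2 * x - 5, 2.0, 3.0)
```

When f is convex or concave on [a, b] one endpoint stays fixed and convergence is only linear, which is why the classical proofs imposed the sign conditions on f' and f'' that the paper removes.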