
A Novel Galerkin Method for Solving PDEs on the Sphere Using Highly Localized Kernel Bases

Posted by Stephen Rowe
Publication date: 2014
Paper language: English





We present a novel Galerkin method for solving partial differential equations on the sphere. The problem is discretized by a highly localized basis which is easily constructed. The stiffness matrix entries are computed by a recently developed quadrature formula unique to the localized bases we consider. We present error estimates and investigate the stability of the discrete stiffness matrix. Implementation and numerical experiments are discussed.
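
To make the overall structure concrete, the Python sketch below assembles a Galerkin system for the model problem -Laplace_S u + u = f on the unit sphere by numerical quadrature. It is only a generic illustration: the highly localized kernel basis and the specialized quadrature formula of the paper are replaced by plain Gaussian kernels centered at roughly uniform nodes and a longitude/Gauss-Legendre product rule, and all sizes, the shape parameter eps, and the right-hand side are arbitrary choices.

import numpy as np

def fibonacci_nodes(n):
    """Roughly uniform points on the unit sphere, used as kernel centers."""
    i = np.arange(n) + 0.5
    theta = np.arccos(1.0 - 2.0 * i / n)              # polar angle
    lam = np.pi * (1.0 + 5.0 ** 0.5) * i              # golden-angle longitude
    return np.column_stack([np.sin(theta) * np.cos(lam),
                            np.sin(theta) * np.sin(lam),
                            np.cos(theta)])

def sphere_quadrature(n_t, n_p):
    """Gauss-Legendre rule in cos(theta) times a trapezoid rule in longitude."""
    t, wt = np.polynomial.legendre.leggauss(n_t)      # t = cos(theta)
    p = np.linspace(0.0, 2.0 * np.pi, n_p, endpoint=False)
    T, P = np.meshgrid(t, p, indexing="ij")
    s = np.sqrt(1.0 - T ** 2)
    X = np.stack([s * np.cos(P), s * np.sin(P), T], axis=-1).reshape(-1, 3)
    W = np.repeat(wt, n_p) * (2.0 * np.pi / n_p)      # weights sum to 4*pi
    return X, W

def kernel(X, c, eps):
    """Gaussian kernel centered at c and its tangential (surface) gradient."""
    d = X - c
    phi = np.exp(-eps * np.sum(d * d, axis=1))
    grad = -2.0 * eps * d * phi[:, None]              # ambient gradient
    grad_s = grad - X * np.sum(grad * X, axis=1, keepdims=True)  # project to tangent plane
    return phi, grad_s

eps = 30.0                                            # shape parameter (conditioning vs. locality)
centers = fibonacci_nodes(100)
Xq, Wq = sphere_quadrature(60, 120)

Phi = np.empty((len(Xq), len(centers)))               # basis values at quadrature nodes
Grad = np.empty((len(Xq), len(centers), 3))           # surface gradients at quadrature nodes
for j, c in enumerate(centers):
    Phi[:, j], Grad[:, j, :] = kernel(Xq, c, eps)

# Stiffness + mass: A_ij = int_S (grad_S phi_i . grad_S phi_j + phi_i phi_j) dS
A = np.einsum("q,qid,qjd->ij", Wq, Grad, Grad) + (Phi * Wq[:, None]).T @ Phi
f = Xq[:, 2]                                          # illustrative right-hand side
b = Phi.T @ (Wq * f)                                  # load vector
coeffs = np.linalg.solve(A, b)                        # coefficients of the Galerkin solution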


Read also

Hanyu Li, Yanjun Zhang (2020)
Using a quite different way of determining the working rows, we propose a novel greedy Kaczmarz method for solving consistent linear systems. Convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the greedy randomized Kaczmarz method and the relaxed greedy randomized Kaczmarz method introduced recently by Bai and Wu [Z.Z. Bai and W.T. Wu, On greedy randomized Kaczmarz method for solving large sparse linear systems, SIAM J. Sci. Comput., 40 (2018), pp. A592--A606; Z.Z. Bai and W.T. Wu, On relaxed greedy randomized Kaczmarz methods for solving large sparse linear systems, Appl. Math. Lett., 83 (2018), pp. 21--26] in terms of computing time.
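
For orientation, a bare-bones greedy Kaczmarz iteration looks roughly like the Python sketch below; it uses a simple largest-scaled-residual row selection, which is a generic greedy rule rather than the exact criterion proposed in this paper, and the test system is arbitrary.

import numpy as np

def greedy_kaczmarz(A, b, max_iter=20_000, tol=1e-12):
    """Kaczmarz iteration that greedily projects onto the row with the largest
    scaled residual (a generic rule, not the paper's exact criterion)."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A * A, axis=1)              # ||a_i||^2 for every row
    b_norm = np.linalg.norm(b)
    for _ in range(max_iter):
        r = b - A @ x                              # current residual
        if np.linalg.norm(r) <= tol * b_norm:
            break
        i = np.argmax(r ** 2 / row_norms)          # most violated equation
        x += (r[i] / row_norms[i]) * A[i]          # orthogonal projection onto it
    return x

# Consistent test system: the right-hand side comes from a known solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 100))
x_true = rng.standard_normal(100)
x = greedy_kaczmarz(A, A @ x_true)
print(np.linalg.norm(x - x_true))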
Recently, collocation based radial basis function (RBF) partition of unity methods (PUM) for solving partial differential equations have been formulated and investigated numerically and theoretically. When combined with stable evaluation methods such as the RBF-QR method, high order convergence rates can be achieved and sustained under refinement. However, some numerical issues remain. The method is sensitive to the node layout, and condition numbers increase with the refinement level. Here, we propose a modified formulation based on least squares approximation. We show that the sensitivity to node layout is removed and that conditioning can be controlled through oversampling. We derive theoretical error estimates both for the collocation and least squares RBF-PUM. Numerical experiments are performed for the Poisson equation in two and three space dimensions for regular and irregular geometries. The convergence experiments confirm the theoretical estimates, and the least squares formulation is shown to be 5-10 times faster than the collocation formulation for the same accuracy.
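
As a stripped-down illustration of the collocation-versus-oversampled-least-squares comparison (without the partition of unity, the RBF-QR evaluation, or the 2D/3D geometries of the paper), the sketch below solves a 1D Poisson-type problem with a plain Gaussian RBF basis, once with as many collocation points as basis functions and once with oversampling, and reports the error and the condition number of the system matrix; all parameters are arbitrary illustrative choices.

import numpy as np

eps = 100.0                                   # Gaussian shape parameter
centers = np.linspace(0.0, 1.0, 25)           # basis-function centers

def phi(x, c):                                # Gaussian basis functions
    return np.exp(-eps * (x[:, None] - c[None, :]) ** 2)

def phi_xx(x, c):                             # their second derivatives
    d = x[:, None] - c[None, :]
    return (4.0 * eps ** 2 * d ** 2 - 2.0 * eps) * phi(x, c)

def solve(n_interior):
    """Enforce -u'' = pi^2 sin(pi x) at n_interior points plus u(0) = u(1) = 0,
    and solve in the least-squares sense (square system when n_interior = 23)."""
    xi = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    A = np.vstack([-phi_xx(xi, centers), phi(np.array([0.0, 1.0]), centers)])
    rhs = np.concatenate([np.pi ** 2 * np.sin(np.pi * xi), np.zeros(2)])
    coeffs = np.linalg.lstsq(A, rhs, rcond=None)[0]
    xe = np.linspace(0.0, 1.0, 400)
    err = np.max(np.abs(phi(xe, centers) @ coeffs - np.sin(np.pi * xe)))
    return err, np.linalg.cond(A)

print("square collocation (25 eqs, 25 unknowns):", solve(23))
print("oversampled least squares (75 eqs):      ", solve(73))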
In this paper we introduce a numerical method for solving nonlinear Volterra integro-differential equations. In the first step, we apply the implicit trapezium rule to discretize the integral in the given equation. Then the Daftardar-Gejji and Jafari technique (DJM) is used to find the unknown term on the right-hand side. We derive an existence-uniqueness theorem for such equations using a Lipschitz condition. We further present the error, convergence, stability and bifurcation analysis of the proposed method. We solve various types of equations using this method and compare the errors with those of other numerical methods; our method is observed to be more efficient.
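
For concreteness, here is a minimal sketch of an implicit-trapezium discretization of a nonlinear Volterra integro-differential equation u'(t) = g(t, u(t)) + int_0^t k(t, s, u(s)) ds. The implicit equation at each step is solved here by plain fixed-point iteration rather than the DJM inner solver described in the abstract, and the test equation is an arbitrary illustrative choice.

import numpy as np

def solve_vide(g, k, u0, T, n, fp_iters=100, fp_tol=1e-12):
    """Implicit trapezium scheme for u'(t) = g(t, u(t)) + int_0^t k(t, s, u(s)) ds."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    u = np.empty(n + 1)
    u[0] = u0

    def integral(m, um):
        # Trapezium approximation of int_0^{t_m} k(t_m, s, u(s)) ds, with the
        # candidate value um used for u(t_m).
        if m == 0:
            return 0.0
        vals = np.array([k(t[m], t[j], u[j]) for j in range(m)] + [k(t[m], t[m], um)])
        w = np.full(m + 1, h)
        w[0] = w[-1] = 0.5 * h
        return float(w @ vals)

    for m in range(n):
        rhs_m = g(t[m], u[m]) + integral(m, u[m])    # explicit part of the step
        unew = u[m]                                  # initial guess for u(t_{m+1})
        for _ in range(fp_iters):                    # fixed-point iteration (DJM substitute)
            cand = u[m] + 0.5 * h * (rhs_m + g(t[m + 1], unew) + integral(m + 1, unew))
            done = abs(cand - unew) < fp_tol
            unew = cand
            if done:
                break
        u[m + 1] = unew
    return t, u

# Illustrative test equation: u'(t) = -u(t) + int_0^t u(s) ds, u(0) = 1.
t, u = solve_vide(g=lambda t, u: -u, k=lambda t, s, us: us, u0=1.0, T=2.0, n=200)
print(u[-1])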
Yanjun Zhang, Hanyu Li (2020)
We present a novel greedy Gauss-Seidel method for solving large linear least-squares problems. This method improves on the greedy randomized coordinate descent (GRCD) method proposed recently by Bai and Wu [Bai ZZ and Wu WT, On greedy randomized coordinate descent methods for solving large linear least-squares problems, Numer Linear Algebra Appl. 2019;26(4):1--15], which in turn improves on the popular randomized Gauss-Seidel method. Convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the GRCD method in terms of computing time.
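
In the same spirit as the Kaczmarz sketch above, the snippet below shows a generic greedy coordinate-descent (Gauss-Seidel type) iteration for a linear least-squares problem, with a largest-scaled-gradient coordinate selection rather than the exact rule of this paper; the test problem is arbitrary.

import numpy as np

def greedy_coordinate_descent(A, b, max_iter=20_000, tol=1e-12):
    """Coordinate descent for min_x ||A x - b||_2^2 that greedily updates the
    coordinate with the largest scaled gradient component (a generic rule,
    not the paper's exact criterion)."""
    m, n = A.shape
    x = np.zeros(n)
    col_norms = np.sum(A * A, axis=0)              # ||A[:, j]||^2 for every column
    r = b.astype(float).copy()                     # residual b - A x
    for _ in range(max_iter):
        g = A.T @ r                                # proportional to the negative gradient
        if np.max(np.abs(g)) <= tol:
            break
        j = np.argmax(g ** 2 / col_norms)          # greedy coordinate choice
        delta = g[j] / col_norms[j]                # exact minimization along e_j
        x[j] += delta
        r -= delta * A[:, j]                       # keep the residual up to date
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((600, 80))
b = rng.standard_normal(600)                       # a generic least-squares problem
x = greedy_coordinate_descent(A, b)
print(np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))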
This paper proposes a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds, identified with point clouds, based on diffusion maps (DM) and deep learning. The PDE solver is formulated as a supervised learning task: a least-squares regression problem that imposes an algebraic equation approximating the PDE (and boundary conditions, if applicable). This algebraic equation involves a graph-Laplacian type matrix obtained via a DM asymptotic expansion, which is a consistent estimator of second-order elliptic differential operators. The resulting numerical method solves a highly non-convex empirical risk minimization problem, with the solution sought in a hypothesis space of neural-network type functions. In a well-posed elliptic PDE setting, when the hypothesis space consists of feedforward neural networks with either infinite width or depth, we show that the global minimizer of the empirical loss function is a consistent solution in the limit of large training data. When the hypothesis space is a two-layer neural network, we show that for a sufficiently large width, the gradient descent method can identify a global minimizer of the empirical loss function. Supporting numerical examples demonstrate the convergence of the solutions and the effectiveness of the proposed solver in avoiding numerical issues that hamper the traditional approach when a large data set becomes available, e.g., large matrix inversion.
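
The graph-Laplacian ingredient of this framework can be sketched independently of the neural-network solver. The Python snippet below builds a standard diffusion-maps estimator of the Laplace-Beltrami operator on a point cloud sampled from the unit circle and applies it to a test function; the point cloud, bandwidth eps, and test function are arbitrary illustrative choices, and the specific normalizations and scalings of the paper may differ from this generic construction.

import numpy as np

def dm_laplacian(X, eps):
    """Diffusion-maps estimator of the Laplace-Beltrami operator on a point
    cloud X (rows are points). Uses the alpha = 1 density normalization; the
    1/eps scaling matches the kernel exp(-r^2 / (4 eps))."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    K = np.exp(-d2 / (4.0 * eps))
    q = K.sum(axis=1)
    K1 = K / np.outer(q, q)                     # remove sampling-density effects
    P = K1 / K1.sum(axis=1, keepdims=True)      # row-stochastic Markov matrix
    return (P - np.eye(len(X))) / eps           # graph-Laplacian type matrix

# Point cloud on the unit circle, a 1D manifold embedded in R^2.
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
X = np.column_stack([np.cos(theta), np.sin(theta)])

L = dm_laplacian(X, eps=2e-3)
u = np.cos(theta)                               # Laplace-Beltrami of cos(theta) is -cos(theta)
print(np.max(np.abs(L @ u + np.cos(theta))))    # error should shrink as eps decreases (with enough points)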