
A Convergence Analysis of the Parallel Schwarz Solution of the Continuous Closest Point Method

Posted by: Alireza Yazdani
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The discretization of surface intrinsic PDEs poses challenges that one might not face in flat space. The closest point method (CPM) is an embedding method that represents a surface by a function mapping points in the flat embedding space to their closest points on the surface. This mapping extends intrinsic data onto the embedding space, allowing us to approximate PDEs numerically by standard methods in a tubular neighborhood of the surface. Here, we solve the surface intrinsic positive Helmholtz equation by the CPM paired with finite differences, which typically yields a large, sparse, and non-symmetric linear system. Domain decomposition methods, especially Schwarz methods, are robust algorithms for solving such linear systems. While there has been substantial work on Schwarz methods, Schwarz methods for solving surface differential equations have not been widely analyzed. In this work, we investigate the convergence of the CPM coupled with Schwarz methods on 1-manifolds in d-dimensional real space.
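To make the mechanics above concrete, the following is a minimal sketch (not the paper's solver, and without the Schwarz decomposition analyzed in it) of the CPM for the positive Helmholtz equation $(c - \Delta_S)u = f$ on the unit circle, the simplest 1-manifold. It alternates Jacobi sweeps of the embedded flat-space equation with the closest point extension $u(x) \leftarrow u(cp(x))$; the grid resolution, the fixed-point iteration, and the use of linear interpolation are simplifying assumptions made for brevity.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Cartesian grid covering a neighbourhood of the unit circle embedded in R^2.
n = 41
x = np.linspace(-2.0, 2.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

# Closest point map onto the unit circle, cp(x) = x/|x| (origin guarded).
r = np.maximum(np.hypot(X, Y), 1e-12)
cpx, cpy = X / r, Y / r
cp_pts = np.column_stack([cpx.ravel(), cpy.ravel()])

c = 1.0
# Manufactured solution u = cos(theta) (= x on the circle), for which
# (c - Laplace_S) u = (c + 1) cos(theta); both are extended constantly
# along normals simply by composing with the closest point map.
F = (c + 1.0) * cpx
U = np.zeros_like(X)

for _ in range(5000):
    # One Jacobi sweep of the embedded flat-space equation (c - Laplace) u = f ...
    V = U.copy()
    V[1:-1, 1:-1] = (F[1:-1, 1:-1]
                     + (U[2:, 1:-1] + U[:-2, 1:-1]
                        + U[1:-1, 2:] + U[1:-1, :-2]) / h**2) / (c + 4.0 / h**2)
    # ... followed by the closest point extension u(x) <- u(cp(x)).
    U = RegularGridInterpolator((x, x), V)(cp_pts).reshape(X.shape)

band = np.abs(np.hypot(X, Y) - 1.0) < 2 * h   # grid points near the surface
print("max error near the circle:", np.abs(U - cpx)[band].max())
```

In practice the CPM is paired with higher-order interpolation, and the resulting large, sparse, non-symmetric system is solved directly or iteratively, which is where the Schwarz methods studied in the paper enter.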




Read also

In contrast with classical Schwarz theory, recent results in computational chemistry have shown that for special domain geometries, the one-level parallel Schwarz method can be scalable. This property is not true in general, and the issue of quantifying the lack of scalability remains an open problem. Even though heuristic explanations are given in the literature, a rigorous and systematic analysis is still missing. In this short manuscript, we provide a first rigorous result that precisely quantifies the lack of scalability of the classical one-level parallel Schwarz method for the solution to the one-dimensional Laplace equation. Our analysis technique provides a possible roadmap for a systematic extension to more realistic problems in higher dimensions.
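The one-dimensional setting of this result is easy to experiment with. The sketch below is an illustration, not the manuscript's analysis: it runs the one-level parallel Schwarz method for the error equation of $-u'' = 0$ on $(0,1)$ with homogeneous Dirichlet boundary conditions, where each homogeneous subdomain solve is a linear function, so an iterate is fully described by its Dirichlet interface values. The equal-width subdomains and the fixed overlap are illustrative choices; the observed contraction factor drifts toward one as the number of subdomains grows, which is the lack of scalability quantified above.

```python
import numpy as np

def psm_contraction(n_sub, overlap=0.02, n_iter=200):
    """One-level parallel Schwarz for -u'' = 0 on (0, 1), u(0) = u(1) = 0.
    Each homogeneous subdomain solve is linear, so an iterate is described
    by its interface values gL[i] = u_i(a_i) and gR[i] = u_i(b_i)."""
    h = 1.0 / n_sub
    a = np.maximum(np.arange(n_sub) * h - overlap, 0.0)        # left endpoints
    b = np.minimum((np.arange(n_sub) + 1) * h + overlap, 1.0)  # right endpoints
    gL = np.ones(n_sub)          # interface values of the initial error
    gR = np.ones(n_sub)
    gL[0] = gR[-1] = 0.0         # physical boundary data are exact
    lin = lambda i, t: gL[i] + (gR[i] - gL[i]) * (t - a[i]) / (b[i] - a[i])
    for _ in range(n_iter):
        newL, newR = gL.copy(), gR.copy()
        for i in range(n_sub):
            if i > 0:
                newL[i] = lin(i - 1, a[i])   # Dirichlet data from the left neighbour
            if i < n_sub - 1:
                newR[i] = lin(i + 1, b[i])   # Dirichlet data from the right neighbour
        gL, gR = newL, newR
    err = max(np.abs(gL).max(), np.abs(gR).max())
    return err ** (1.0 / n_iter)             # observed contraction factor

for n in (2, 4, 8, 16, 32):
    print(f"{n:3d} subdomains: contraction factor ~ {psm_contraction(n):.4f}")
```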
In this article, we analyse the convergence behaviour and scalability properties of the one-level Parallel Schwarz method (PSM) for domain decomposition problems in which the boundaries of many subdomains lie in the interior of the global domain. Such problems arise, for instance, in solvation models in computational chemistry. Existing results on the scalability of the one-level PSM are limited to situations where each subdomain has access to the external boundary, and at most only two subdomains have a common overlap. We develop a systematic framework that allows us to bound the norm of the Schwarz iteration operator for domain decomposition problems in which subdomains may be completely embedded in the interior of the global domain and an arbitrary number of subdomains may have a common overlap.
Jongho Park, 2019
In this paper, we propose an overlapping additive Schwarz method for total variation minimization based on a dual formulation. The $O(1/n)$-energy convergence of the proposed method is proven, where $n$ is the number of iterations. In addition, we introduce an interesting convergence property of the proposed method called pseudo-linear convergence: the energy of the proposed method decreases as fast as that of linearly convergent algorithms until it reaches a particular value. It is shown that this particular value depends on the overlap width $\delta$, and the proposed method becomes as efficient as linearly convergent algorithms if $\delta$ is large. Since state-of-the-art domain decomposition methods for total variation minimization are only sublinearly convergent, the proposed method outperforms them in terms of energy decay. Numerical experiments supporting our theoretical results are provided.
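As a rough illustration of the overlapping-decomposition idea in the simplest possible setting, the sketch below works in one dimension with two overlapping blocks and a sequential sweep over the blocks, applied to the dual formulation of the ROF denoising model; it is not the additive method analyzed in the paper, and the block layout, step size, and iteration counts are assumptions made for illustration.

```python
import numpy as np

def tv_denoise_block_schwarz(f, lam=1.0, n_sweeps=500, overlap=10):
    """1D ROF denoising min_u 0.5*||u - f||^2 + lam*TV(u), solved through its
    dual min_{|p|<=1} 0.5*||lam*D^T p - f||^2 by projected gradient steps
    applied block by block over two overlapping blocks of dual variables."""
    n = f.size
    p = np.zeros(n - 1)                       # one dual variable per edge
    D = lambda u: np.diff(u)                  # forward difference
    Dt = lambda q: np.concatenate(([-q[0]], -np.diff(q), [q[-1]]))  # its adjoint
    tau = 1.0 / (4.0 * lam**2)                # step size; ||D D^T|| <= 4
    mid = (n - 1) // 2
    blocks = [slice(0, mid + overlap), slice(mid - overlap, n - 1)]
    for _ in range(n_sweeps):
        for blk in blocks:                    # sequential sweep over the blocks
            grad = lam * D(lam * Dt(p) - f)   # gradient of the dual objective
            p[blk] = np.clip(p[blk] - tau * grad[blk], -1.0, 1.0)
    return f - lam * Dt(p)                    # recover the primal minimizer

# Noisy piecewise-constant signal; the recovered u should be nearly flat on
# each piece with the jump preserved.
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(100), np.ones(100)]) + 0.1 * rng.standard_normal(200)
u = tv_denoise_block_schwarz(f, lam=0.5)
print("residual std on the first piece:", u[:90].std())
```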
Trung Nguyen, 2021
Regula Falsi, or the method of false position, is a numerical method for finding an approximate solution of f(x) = 0 on a finite interval [a, b], where f is a real-valued continuous function on [a, b] satisfying f(a)f(b) < 0. Previous studies proved the convergence of this method under certain assumptions on the function f, such as requiring that neither the first nor the second derivative of f change sign on the interval [a, b]. In this paper, we remove those assumptions and prove the convergence of the method for all continuous functions.
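For reference, the basic method is only a few lines; a minimal sketch follows, in which the tolerance, iteration cap, and test function are arbitrary illustrative choices.

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=1000):
    """Method of false position for f(x) = 0 on [a, b] with f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)      # x-intercept of the secant line
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:                       # root lies in [a, c]
            b, fb = c, fc
        else:                                 # root lies in [c, b]
            a, fa = c, fc
    return c

# Example: the real root of x^3 - 2x - 5 on [2, 3].
print(regula_falsi(lambda x: x**3 - 2*x - 5, 2.0, 3.0))
```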
Using deep neural networks to solve PDEs has attracted a lot of attention recently. However, the understanding of why deep learning methods work still lags far behind their empirical success. In this paper, we provide a rigorous numerical analysis of the deep Ritz method (DRM) \cite{wan11} for second order elliptic equations with Neumann boundary conditions. We establish the first nonasymptotic convergence rate in the $H^1$ norm for the DRM using deep networks with $\mathrm{ReLU}^2$ activation functions. In addition to providing a theoretical justification of the DRM, our study also sheds light on how to set the hyper-parameters of depth and width to achieve the desired convergence rate in terms of the number of training samples. Technically, we derive bounds on the approximation error of deep $\mathrm{ReLU}^2$ networks in the $H^1$ norm and on the Rademacher complexity of the non-Lipschitz composition of the gradient norm and $\mathrm{ReLU}^2$ networks, both of which are of independent interest.
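A minimal deep Ritz sketch helps to fix ideas, although it is far from the paper's setting: it uses PyTorch, a tiny $\mathrm{ReLU}^2$ network, and the one-dimensional Neumann model problem $-u'' + u = f$ on $(0,1)$, whose natural boundary conditions let the Ritz energy be minimized without a boundary penalty. The network size, optimizer settings, and model problem are all assumptions made for illustration, and the error printed at the end only reflects the quality of this particular short run.

```python
import math
import torch

torch.manual_seed(0)

class ReLU2(torch.nn.Module):
    """ReLU^2 activation, x -> max(x, 0)^2."""
    def forward(self, x):
        return torch.relu(x) ** 2

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), ReLU2(),
    torch.nn.Linear(32, 32), ReLU2(),
    torch.nn.Linear(32, 1),
)

# Model problem -u'' + u = f on (0, 1) with natural Neumann conditions;
# the exact solution is u(x) = cos(pi x) for f(x) = (pi^2 + 1) cos(pi x).
f = lambda x: (math.pi**2 + 1.0) * torch.cos(math.pi * x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)                   # Monte Carlo samples
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]   # u'(x)
    # Ritz energy E(u) = int 0.5 u'^2 + 0.5 u^2 - f u dx, sampled by Monte Carlo.
    energy = (0.5 * du**2 + 0.5 * u**2 - f(x) * u).mean()
    opt.zero_grad()
    energy.backward()
    opt.step()

xt = torch.linspace(0.0, 1.0, 101).unsqueeze(1)
err = (net(xt) - torch.cos(math.pi * xt)).abs().max().item()
print(f"max error against cos(pi x): {err:.3e}")
```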