
Overlapping Domain Decomposition Methods for Ptychographic Imaging

Published by: Huibin Chang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English

In ptychography experiments, redundant scanning is usually required to guarantee stable recovery, so a huge number of frames are generated, which creates a great demand for parallel computing in order to solve this large-scale inverse problem. In this paper, we propose overlapping Domain Decomposition Methods (DDMs) to solve the nonconvex optimization problem in ptychographic imaging. They decouple the problem defined on the whole domain into subproblems defined only on the subdomains, with information synchronized in the overlapping regions of these subdomains, thus leading to highly parallel algorithms with good load balance. More specifically, for nonblind recovery (with the probe known in advance), by enforcing continuity of the image (sample) over the overlapping regions, a nonlinear optimization model is established based on a novel smooth-truncated amplitude-Gaussian metric (ST-AGM). This metric allows fast calculation of the proximal mapping in closed form and, thanks to its Lipschitz smoothness, makes a convergence guarantee possible for first-order nonconvex optimization algorithms. The Alternating Direction Method of Multipliers (ADMM) is then used to derive an efficient Overlapping Domain Decomposition based Ptychography algorithm (OD2P) for the two-subdomain domain decomposition (DD), where all subproblems can be computed with closed-form solutions. Due to the Lipschitz continuity of the gradient of the objective function with ST-AGM, convergence of the proposed OD2P is derived under mild conditions. Moreover, it is extended to more general cases, including multiple-subdomain DD and blind recovery. Numerical experiments are conducted to show the performance of the proposed algorithms, demonstrating good convergence speed and robustness to noise.
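As a rough illustration of the splitting structure described above, the following Python sketch decomposes a one-dimensional signal into two overlapping subdomains and synchronizes them through an ADMM-style consensus variable on the overlap. The quadratic data term, problem sizes, and penalty parameter are illustrative stand-ins; the actual OD2P algorithm uses the ST-AGM data fidelity with ptychographic measurements, which is not reproduced here.

# Minimal sketch of a two-subdomain overlapping decomposition with an
# ADMM-style consensus on the overlap. A plain quadratic data term stands in
# for the paper's ST-AGM model, so only the splitting/synchronization
# structure is illustrated (all sizes below are illustrative assumptions).
import numpy as np

n, ov = 128, 16                       # signal length and half-overlap width
idx1 = np.arange(0, n // 2 + ov)      # subdomain 1 (left part plus overlap)
idx2 = np.arange(n // 2 - ov, n)      # subdomain 2 (right part plus overlap)
y = np.random.rand(n)                 # stand-in "measurements" (hypothetical)

z = np.zeros(2 * ov)                  # consensus variable on the overlap
u1 = np.zeros(2 * ov)                 # scaled dual variable, subdomain 1
u2 = np.zeros(2 * ov)                 # scaled dual variable, subdomain 2
rho = 1.0                             # ADMM penalty parameter

for _ in range(50):
    # Local subproblems: data term plus consensus penalty on the overlap.
    # With a quadratic data term the minimizers are available in closed form.
    x1 = y[idx1].copy()
    x1[-2 * ov:] = (y[idx1][-2 * ov:] + rho * (z - u1)) / (1 + rho)
    x2 = y[idx2].copy()
    x2[:2 * ov] = (y[idx2][:2 * ov] + rho * (z - u2)) / (1 + rho)
    # Consensus update: average the two overlap estimates.
    z = 0.5 * (x1[-2 * ov:] + u1 + x2[:2 * ov] + u2)
    # Dual ascent on the overlap constraints.
    u1 += x1[-2 * ov:] - z
    u2 += x2[:2 * ov] - z

x = np.concatenate([x1[:-2 * ov], z, x2[2 * ov:]])  # stitched reconstruction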

Read also

74 - Jongho Park 2020
In this paper, we propose a novel overlapping domain decomposition method that can be applied to various problems in variational imaging such as total variation minimization. Most recent domain decomposition methods for total variation minimization adopt the Fenchel--Rockafellar duality, whereas the proposed method is based on the primal formulation. Thus, the proposed method can be applied not only to total variation minimization but also to problems with complex dual formulations such as higher-order models. In the proposed method, an equivalent formulation of the model problem with parallel structure is constructed using a custom overlapping domain decomposition scheme with the notion of essential domains. As a solver for the constructed formulation, we propose a decoupled augmented Lagrangian method for untying the coupling of adjacent subdomains. Convergence analysis of the decoupled augmented Lagrangian method is provided. We present implementation details and numerical examples for various model problems, including total variation minimization and higher-order models.
209 - Chao Chen, George Biros 2021
The discretization of certain integral equations, e.g., the first-kind Fredholm equation of Laplace's equation, leads to symmetric positive-definite linear systems, where the coefficient matrix is dense and often ill-conditioned. We introduce a new preconditioner based on a novel overlapping domain decomposition that can be combined efficiently with fast direct solvers. Empirically, we observe that the condition number of the preconditioned system is $O(1)$, independent of the problem size. Our domain decomposition is designed so that we can construct approximate factorizations of the subproblems efficiently. In particular, we apply the recursive skeletonization algorithm to the subproblems associated with every subdomain. We present numerical results on problem sizes up to $16,384^2$ in 2D and $256^3$ in 3D, which were solved in less than 16 hours and three hours, respectively, on an Intel Xeon Platinum 8280M.
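As a hedged sketch of how an overlapping decomposition can be paired with precomputed local factorizations inside a Krylov solver, the Python snippet below runs preconditioned conjugate gradient on a small sparse SPD model matrix, using Cholesky factors of overlapping diagonal blocks in place of the recursive-skeletonization factorizations and the dense integral-equation systems considered in the paper. Matrix, subdomain layout, and sizes are illustrative assumptions.

# Overlapping one-level DD preconditioner inside CG: each application of
# M^{-1} sums local solves with precomputed Cholesky factors (stand-ins for
# the paper's fast approximate factorizations). Illustrative model problem.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

n, ov = 200, 10
A = np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.random.rand(n)

subs = [np.arange(max(0, s - ov), min(n, s + 50 + ov)) for s in range(0, n, 50)]
factors = [(idx, cho_factor(A[np.ix_(idx, idx)])) for idx in subs]  # setup phase

def apply_prec(r):
    """Additive combination of local subdomain solves (overlaps simply summed)."""
    z = np.zeros_like(r)
    for idx, f in factors:
        z[idx] += cho_solve(f, r[idx])
    return z

# Standard preconditioned conjugate gradient loop.
x = np.zeros(n)
r = b - A @ x
z = apply_prec(r)
p = z.copy()
for _ in range(100):
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        break
    z_new = apply_prec(r_new)
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new

print(np.linalg.norm(A @ x - b))      # final residual norm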
83 - Jongho Park 2019
This paper gives a unified convergence analysis of additive Schwarz methods for general convex optimization problems. In analogy with the fact that additive Schwarz methods for linear problems are preconditioned Richardson methods, we prove that additive Schwarz methods for general convex optimization are in fact gradient methods. An abstract framework for the convergence analysis of additive Schwarz methods is then proposed. Applied to linear elliptic problems, the proposed framework agrees with the classical theory. We present applications of the proposed framework to various interesting convex optimization problems such as nonlinear elliptic problems, nonsmooth problems, and nonsharp problems.
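A minimal sketch of that viewpoint: for a quadratic model problem, one damped additive Schwarz sweep is exactly a preconditioned gradient step, with each subdomain contributing an independent local correction. The matrix, subdomain layout, and damping parameter below are illustrative choices, not taken from the paper.

# One-level additive Schwarz for min_x 0.5*x'Ax - b'x, written as the
# preconditioned gradient step x <- x + tau * sum_i R_i' A_ii^{-1} R_i (b - Ax).
import numpy as np

n, ov = 64, 4
A = np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.ones(n)

# Two overlapping index sets (subdomains); tau damps the summed corrections.
subs = [np.arange(0, n // 2 + ov), np.arange(n // 2 - ov, n)]
tau = 0.5

x = np.zeros(n)
for _ in range(200):
    r = b - A @ x                      # global residual (negative gradient)
    update = np.zeros(n)
    for idx in subs:                   # independent local solves (parallelizable)
        d = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
        update[idx] += d
    x += tau * update                  # damped gradient-type step

print(np.linalg.norm(A @ x - b))       # residual norm after the sweeps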
233 - Yingjun Jiang, Xuejun Xu 2015
In this paper, a two-level additive Schwarz preconditioner is proposed for solving the algebraic systems resulting from finite element approximations of space fractional partial differential equations (SFPDEs). It is shown that the condition number of the preconditioned system is bounded by $C(1+H/\delta)$, where $H$ is the maximum diameter of the subdomains and $\delta$ is the overlap size among the subdomains. Numerical results are given to support our theoretical findings.
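In standard matrix notation (generic two-level additive Schwarz notation, not the paper's specific finite element spaces for SFPDEs), the preconditioner and the quoted bound can be written as
\[
M_{\mathrm{AS}}^{-1} \;=\; R_0^{\mathsf T} A_0^{-1} R_0 \;+\; \sum_{i=1}^{N} R_i^{\mathsf T} A_i^{-1} R_i,
\qquad
\kappa\!\left(M_{\mathrm{AS}}^{-1} A\right) \;\le\; C\left(1 + \frac{H}{\delta}\right),
\]
where $R_i$ restricts to the $i$-th overlapping subdomain, $A_i = R_i A R_i^{\mathsf T}$, $R_0$ and $A_0$ are the coarse-space restriction and coarse matrix, $H$ is the maximum subdomain diameter, and $\delta$ is the overlap width.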
This paper proposes a deep-learning-based domain decomposition method (DeepDDM), which leverages deep neural networks (DNNs) to discretize the subproblems produced by domain decomposition methods (DDM) for solving partial differential equations (PDEs). Using a DNN to solve a PDE is a physics-informed learning problem whose objective involves two terms, a domain term and a boundary term, which respectively make the desired solution satisfy the PDE and the corresponding boundary conditions. DeepDDM exchanges subproblem information across the interfaces in DDM by adjusting the boundary term when solving each subproblem with a DNN. Benefiting from the simple implementation and mesh-free strategy of using DNNs for PDEs, DeepDDM simplifies the implementation of DDM and makes DDM more flexible for complex PDEs, e.g., those with complex interfaces in the computational domain. This paper first investigates the performance of DeepDDM on elliptic problems, including a model problem and an interface problem. The numerical examples demonstrate that DeepDDM exhibits behavior consistent with conventional DDM: the number of iterations required by DeepDDM is independent of the network architecture and decreases with increasing overlap size. The performance of DeepDDM on elliptic problems encourages further investigation of its performance on other kinds of PDEs and may provide new insights for improving PDE solvers by deep learning.
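To illustrate only the interface-exchange structure of DeepDDM (without the learning component), the Python sketch below solves -u'' = 1 on (0,1) with two overlapping subdomains, where a finite-difference solve stands in for the DNN subproblem solver and each Schwarz iteration passes the latest interface values to the neighboring subdomain as Dirichlet data. The model problem and all sizes are illustrative assumptions, not from the paper.

# Overlapping Schwarz interface exchange for -u'' = 1, u(0) = u(1) = 0.
# A finite-difference local solver stands in for the DNN subproblem solver.
import numpy as np

def local_solve(n, h, left_bc, right_bc):
    """Solve -u'' = 1 on a subdomain with given Dirichlet interface values."""
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = np.ones(n)
    rhs[0] += left_bc / h**2
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

N = 101                                # global grid points incl. boundaries
h = 1.0 / (N - 1)
u = np.zeros(N)
a, b = 40, 60                          # interface indices; [0,b] and [a,N-1] overlap

for _ in range(30):                    # Schwarz iterations (interface exchanges)
    # Subdomain 1: interior nodes 1..b-1, right interface value taken from u[b].
    u[1:b] = local_solve(b - 1, h, 0.0, u[b])
    # Subdomain 2: interior nodes a+1..N-2, left interface value taken from u[a].
    u[a + 1:N - 1] = local_solve(N - 2 - a, h, u[a], 0.0)

xg = np.linspace(0, 1, N)
print(np.max(np.abs(u - 0.5 * xg * (1 - xg))))  # error vs. exact solution x(1-x)/2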