
Additive Schwarz Methods for Convex Optimization as Gradient Methods

Posted by Jongho Park
Publication date: 2019
Research field: Informatics Engineering
Paper language: English
Author: Jongho Park





This paper gives a unified convergence analysis of additive Schwarz methods for general convex optimization problems. Just as additive Schwarz methods for linear problems are preconditioned Richardson methods, we prove that additive Schwarz methods for general convex optimization are in fact gradient methods. We then propose an abstract framework for the convergence analysis of additive Schwarz methods. Applied to linear elliptic problems, the proposed framework agrees with the classical theory. We present applications of the framework to various convex optimization problems of interest, such as nonlinear elliptic problems, nonsmooth problems, and nonsharp problems.
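As a rough illustration of this interpretation (a sketch, not the paper's code), the snippet below applies additive Schwarz to the quadratic energy F(u) = 0.5 u^T A u - b^T u arising from a linear elliptic problem: summing the local solves over overlapping index blocks and relaxing by a factor tau is exactly one step of a preconditioned gradient (Richardson) iteration. The matrix, block choices, and relaxation parameter are all illustrative.

import numpy as np

# Illustrative sketch: additive Schwarz for the quadratic energy
# F(u) = 0.5*u^T A u - b^T u with two overlapping index blocks.
# Each local solve minimizes F restricted to its block; summing the
# corrections and relaxing by tau is a preconditioned gradient step.
def additive_schwarz_step(u, A, b, blocks, tau=0.5):
    residual = b - A @ u                      # -grad F(u)
    correction = np.zeros_like(u)
    for idx in blocks:
        A_loc = A[np.ix_(idx, idx)]           # local stiffness block
        correction[idx] += np.linalg.solve(A_loc, residual[idx])
    return u + tau * correction               # gradient step, preconditioner sum_k R_k^T A_k^{-1} R_k

# Toy 1D Laplacian with two overlapping subdomains (sizes are illustrative).
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
blocks = [np.arange(0, 12), np.arange(8, 20)]
u = np.zeros(n)
for _ in range(200):
    u = additive_schwarz_step(u, A, b, blocks)
print(np.linalg.norm(b - A @ u))              # residual norm decreases geometrically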


Read also

Jongho Park, 2020
Based on the observation that additive Schwarz methods for general convex optimization can be interpreted as gradient methods, we propose an acceleration scheme for additive Schwarz methods. Adopting acceleration techniques developed for gradient methods, such as momentum and adaptive restarting, the convergence rate of additive Schwarz methods is greatly improved. The proposed acceleration scheme does not require any a priori information on the levels of smoothness and sharpness of a target energy functional, so it can be applied to various convex optimization problems. Numerical results for linear elliptic problems, nonlinear elliptic problems, nonsmooth problems, and nonsharp problems are provided to highlight the superiority and broad applicability of the proposed scheme.
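A schematic version of such an acceleration loop (Nesterov-type momentum plus objective-based adaptive restarting wrapped around an abstract Schwarz sweep) might look as follows; schwarz_step and energy are hypothetical user-supplied callables, and the restart rule shown is one common variant rather than the paper's exact scheme.

import numpy as np

# Sketch only: momentum with adaptive restarting around an abstract
# additive Schwarz sweep (schwarz_step maps an iterate to the next one).
def accelerated_schwarz(u0, schwarz_step, energy, iters=100):
    u, u_prev = u0.copy(), u0.copy()
    t = 1.0
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        beta = (t - 1.0) / t_next             # momentum weight
        v = u + beta * (u - u_prev)           # extrapolated point
        u_next = schwarz_step(v)              # one additive Schwarz sweep
        if energy(u_next) > energy(u):        # adaptive restart: drop momentum
            t_next, u_next = 1.0, schwarz_step(u)
        u_prev, u, t = u, u_next, t_next
    return u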
We provide new adaptive first-order methods for constrained convex optimization. Our main algorithms AdaACSA and AdaAGD+ are accelerated methods, which are universal in the sense that they achieve nearly-optimal convergence rates for both smooth and non-smooth functions, even when they only have access to stochastic gradients. In addition, they do not require any prior knowledge on how the objective function is parametrized, since they automatically adjust their per-coordinate learning rate. These can be seen as truly accelerated Adagrad methods for constrained optimization. We complement them with a simpler algorithm AdaGrad+ which enjoys the same features, and achieves the standard non-accelerated convergence rate. We also present a set of new results involving adaptive methods for unconstrained optimization and monotone operators.
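For orientation only, a bare-bones diagonal-AdaGrad-style projected step with per-coordinate learning rates is sketched below; the AdaACSA, AdaAGD+, and AdaGrad+ methods described above are considerably more refined (acceleration, constraint handling, stochastic gradients). grad and project are hypothetical callables supplied by the user.

import numpy as np

# Sketch: projected gradient descent with AdaGrad-style per-coordinate
# step sizes; project is a Euclidean projection onto the constraint set.
def adagrad_projected(x0, grad, project, iters=1000, eta=1.0, eps=1e-8):
    x = x0.copy()
    g2_sum = np.zeros_like(x)                 # running sum of squared gradients
    for _ in range(iters):
        g = grad(x)
        g2_sum += g * g
        x = project(x - eta * g / (np.sqrt(g2_sum) + eps))  # per-coordinate step
    return x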
The Fast Proximal Gradient Method (FPGM) and the Monotone FPGM (MFPGM) for minimization of nonsmooth convex functions are introduced and applied to tomographic image reconstruction. Convergence properties of the sequence of objective function values are derived, including a $O\left(1/k^{2}\right)$ non-asymptotic bound. The presented theory broadens current knowledge and explains the convergence behavior of certain methods that are known to exhibit good practical performance. Numerical experimentation involving computerized tomography image reconstruction shows the methods to be competitive in practical scenarios. An experimental comparison with Algebraic Reconstruction Techniques is performed, uncovering certain behaviors of accelerated proximal gradient algorithms that apparently had not been noticed when these are applied to tomographic image reconstruction.
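For context, a standard FISTA-type proximal gradient iteration of the kind this abstract refers to is sketched below (the paper's FPGM/MFPGM variants may differ in details). It assumes f is smooth with Lipschitz gradient constant L and g is accessed through its proximal operator; grad_f and prox_g are user-supplied placeholders.

import numpy as np

# Sketch: accelerated proximal gradient (FISTA-type) for min f(x) + g(x).
def fista(x0, grad_f, prox_g, L, iters=200):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = prox_g(y - grad_f(y) / L, 1.0 / L)        # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x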
In ptychography experiments, redundant scanning is usually required to guarantee stable recovery, so a huge number of frames is generated; this poses a great demand for parallel computing in order to solve the resulting large-scale inverse problem. In this paper, we propose overlapping Domain Decomposition Methods (DDMs) to solve the nonconvex optimization problem in ptychographic imaging. They decouple the problem defined on the whole domain into subproblems defined only on subdomains, synchronizing information in the overlapping regions of these subdomains, thus leading to highly parallel algorithms with good load balance. More specifically, for nonblind recovery (with the probe known in advance), by enforcing the continuity of the overlapping regions for the image (sample), the nonlinear optimization model is established based on a novel smooth-truncated amplitude-Gaussian metric (ST-AGM). Such a metric allows for fast calculation of the proximal mapping in closed form and, thanks to its Lipschitz smoothness, provides the possibility of a convergence guarantee for the first-order nonconvex optimization algorithm. Then the Alternating Direction Method of Multipliers (ADMM) is utilized to generate an efficient Overlapping Domain Decomposition based Ptychography algorithm (OD2P) for the two-subdomain domain decomposition (DD), where all subproblems can be computed with closed-form solutions. Due to the Lipschitz continuity of the gradient of the objective function with ST-AGM, the convergence of the proposed OD2P is derived under mild conditions. Moreover, it is extended to more general cases, including multiple-subdomain DD and blind recovery. Numerical experiments are further conducted to show the performance of the proposed algorithms, demonstrating good convergence speed and robustness to noise.
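As a schematic of the overlapping-domain-decomposition idea only (not the paper's OD2P algorithm or its ST-AGM metric), a two-subdomain consensus ADMM loop that enforces agreement on the overlap could look as follows; the local solvers and overlap index sets are hypothetical placeholders.

import numpy as np

# Sketch: two-subdomain consensus ADMM (scaled form). local_solve_k minimizes
# its local objective plus (rho/2)*||x[ov] - target||^2; ov1 and ov2 index the
# same physical overlap region in each subdomain's local numbering.
def overlap_admm(x1, x2, ov1, ov2, local_solve_1, local_solve_2,
                 rho=1.0, iters=50):
    z = np.zeros(len(ov1))                    # shared values on the overlap
    u1 = np.zeros_like(z)                     # scaled dual variables
    u2 = np.zeros_like(z)
    for _ in range(iters):
        x1 = local_solve_1(x1, ov1, z - u1, rho)     # subdomain 1 update
        x2 = local_solve_2(x2, ov2, z - u2, rho)     # subdomain 2 update
        z = 0.5 * ((x1[ov1] + u1) + (x2[ov2] + u2))  # consensus on the overlap
        u1 += x1[ov1] - z                     # dual ascent
        u2 += x2[ov2] - z
    return x1, x2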
We develop a theoretical foundation for the application of Nesterov's accelerated gradient descent method (AGD) to the approximation of solutions of a wide class of partial differential equations (PDEs). This is achieved by proving the existence of an invariant set and exponential convergence rates when its preconditioned version (PAGD) is applied to minimize locally Lipschitz smooth, strongly convex objective functionals. We introduce a second-order ordinary differential equation (ODE) with a built-in preconditioner and show that PAGD is an explicit time discretization of this ODE, which requires a natural time-step restriction for energy stability. At the continuous-time level, we show exponential convergence of the ODE solution to its steady state using a simple energy argument. At the discrete level, assuming the aforementioned step-size restriction, the existence of an invariant set is proved and a matching exponential rate of convergence of the PAGD scheme is derived by mimicking the energy argument and the convergence at the continuous level. Applications of the PAGD method to numerical PDEs are demonstrated for certain nonlinear elliptic PDEs using pseudo-spectral methods for spatial discretization, and several numerical experiments are conducted. The results confirm the global geometric and mesh-size-independent convergence of the PAGD method, with an accelerated rate that improves over the preconditioned gradient descent (PGD) method.
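Schematically, and with notation assumed here rather than taken from the paper, this kind of construction couples a damped second-order ODE carrying a preconditioner to an explicit time discretization:

% Schematic only; the paper's exact ODE and discretization may differ.
\[
  \mathcal{P}\,\ddot{u}(t) + \alpha\,\mathcal{P}\,\dot{u}(t) + \nabla F\bigl(u(t)\bigr) = 0,
\]
\[
  \frac{u^{n+1} - 2u^{n} + u^{n-1}}{\Delta t^{2}}
  + \alpha\,\frac{u^{n+1} - u^{n}}{\Delta t}
  = -\,\mathcal{P}^{-1}\nabla F(u^{n}),
\]
% with preconditioner \mathcal{P}, damping parameter \alpha > 0, and time step
% \Delta t subject to an energy-stability restriction.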