
A New Insight on Augmented Lagrangian Method and Its Extensions

Published by: Jianchao Bai
Publication date: 2021
Research field: Informatics Engineering
Paper language: English
Author: Jianchao Bai





Motivated by the recent work [He-Yuan, Balanced Augmented Lagrangian Method for Convex Programming, arXiv: 2108.08554v1, (2021)], a novel Augmented Lagrangian Method (ALM) is proposed for solving a family of convex optimization problems subject to equality or inequality constraints. The new method is then extended to solve the multi-block separable convex optimization problem, and two related primal-dual hybrid gradient algorithms are also discussed. Preliminary and some new convergence results are established with the aid of variational analysis of both the saddle point of the problem and the first-order optimality conditions of the involved subproblems.
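To make the primal-dual structure that the abstract alludes to concrete, the following is a minimal Python sketch of the classical ALM iteration for an equality-constrained quadratic program. It illustrates the generic ALM template only, not the balanced variant of He-Yuan or the new method of this paper; the quadratic model and all names are assumptions made for the example.

```python
import numpy as np

def alm_quadratic(Q, c, A, b, beta=1.0, iters=100):
    """Classical ALM sketch for  min 0.5*x'Qx + c'x  s.t.  Ax = b.

    x-step: minimize the augmented Lagrangian
        L_beta(x, y) = 0.5*x'Qx + c'x + y'(Ax - b) + 0.5*beta*||Ax - b||^2,
    which for a quadratic objective reduces to one linear solve.
    y-step: dual ascent on the multiplier with step size beta.
    """
    x = np.zeros(Q.shape[0])
    y = np.zeros(A.shape[0])
    H = Q + beta * A.T @ A                 # Hessian of L_beta in x
    for _ in range(iters):
        x = np.linalg.solve(H, -(c + A.T @ y) + beta * A.T @ b)  # primal step
        y = y + beta * (A @ x - b)                               # dual update
    return x, y

# toy usage: minimize a strongly convex quadratic over {x : sum(x) = 1}
Q = np.eye(3); c = np.array([1.0, -2.0, 0.5])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
x, y = alm_quadratic(Q, c, A, b)
print(x, A @ x - b)   # constraint residual should be near zero
```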


Read also

This paper is concerned with a novel deep learning method for variational problems with essential boundary conditions. To this end, we first reformulate the original problem into a minimax problem corresponding to a feasible augmented Lagrangian, which can be solved by the augmented Lagrangian method in an infinite-dimensional setting. Based on this, by expressing the primal and dual variables with two individual deep neural network functions, we present an augmented Lagrangian deep learning method whose parameters are trained by a stochastic optimization method together with a projection technique. Compared to the traditional penalty method, the new method admits two main advantages: i) the choice of the penalty parameter is flexible and robust, and ii) the numerical solution is more accurate at the same magnitude of computational cost. As typical applications, we apply the new approach to solve elliptic problems and (nonlinear) eigenvalue problems with essential boundary conditions, and numerical experiments are presented to show the effectiveness of the new method.
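As a heavily simplified illustration of the minimax training loop described above, here is a schematic PyTorch sketch for a 1D Poisson-type energy with homogeneous essential boundary conditions. Because the boundary of a 1D interval is just two points, a two-component multiplier vector stands in for the paper's dual network; all architectural choices and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical model problem: min_u 0.5*∫u'^2 - ∫f*u on (0,1) with u(0)=u(1)=0,
# enforced through an augmented Lagrangian rather than a pure penalty.
primal = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # u_theta
lam = torch.zeros(2)          # multipliers at x = 0 and x = 1 (dual variable)
opt = torch.optim.Adam(primal.parameters(), lr=1e-3)
rho = 10.0                    # augmentation (penalty) weight
f = lambda x: torch.ones_like(x)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)                 # interior samples
    u = primal(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * du**2 - f(x) * u).mean()                   # Ritz energy
    xb = torch.tensor([[0.0], [1.0]])
    g = primal(xb).squeeze()                                   # boundary residual
    loss = energy + (lam * g).sum() + 0.5 * rho * (g**2).sum()
    opt.zero_grad(); loss.backward(); opt.step()               # descent in u
    with torch.no_grad():
        lam += rho * primal(xb).squeeze()                      # ascent in lambda
```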
The magnetohydrodynamics (MHD) equations are generally known to be difficult to solve numerically, due to their highly nonlinear structure and the strong coupling between the electromagnetic and hydrodynamic variables, especially for high Reynolds and coupling numbers. In this work, we present a scalable augmented Lagrangian preconditioner for a finite element discretization of the $\mathbf{B}$-$\mathbf{E}$ formulation of the incompressible viscoresistive MHD equations. For stationary problems, our solver achieves robust performance with respect to the Reynolds and coupling numbers in two dimensions and good results in three dimensions. We extend our method to fully implicit methods for time-dependent problems, which we solve robustly in both two and three dimensions. Our approach relies on specialized parameter-robust multigrid methods for the hydrodynamic and electromagnetic blocks. The scheme ensures exactly divergence-free approximations of both the velocity and the magnetic field up to solver tolerances. We confirm the robustness of our solver by numerical experiments in which we consider fluid and magnetic Reynolds numbers and coupling numbers up to 10,000 for stationary problems and up to 100,000 for transient problems in two and three dimensions.
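The abstract's key algorithmic idea, augmenting the (1,1) block so that a cheap Schur-complement approximation becomes accurate, can be sketched for a generic saddle-point system. The block structure below is the standard augmented Lagrangian preconditioner pattern, with dense solves standing in for the paper's parameter-robust multigrid cycles; every name here is a placeholder, not the paper's code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

def al_preconditioner(A, B, Mp, gamma):
    """Schematic AL block preconditioner for [[A, B^T], [B, 0]] [u; p] = [f; g].

    The (1,1) block is augmented, A_gamma = A + gamma * B^T Mp^{-1} B, which
    makes the Schur complement well approximated by -(1/gamma) * Mp for large
    gamma; the price is a harder A_gamma solve (multigrid in the paper).
    """
    n, m = A.shape[0], B.shape[0]
    Agamma = A + gamma * B.T @ np.linalg.solve(Mp, B)
    def apply(r):
        ru, rp = r[:n], r[n:]
        p = -gamma * np.linalg.solve(Mp, rp)          # approximate Schur solve
        u = np.linalg.solve(Agamma, ru - B.T @ p)     # block back-substitution
        return np.concatenate([u, p])
    return LinearOperator((n + m, n + m), matvec=apply)

# usage: pass as M=al_preconditioner(...) to scipy.sparse.linalg.gmres
```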
In the current work, the less familiar shifted Lucas polynomials are introduced. We construct a computational wavelet technique for the solution of initial/boundary value second-order differential equations. For this numerical scheme, we develop a weight function and a Rodrigues formula for the Lucas polynomials. Further, the Lucas polynomials and their properties are used to propose the shifted Lucas polynomials, whose utilization in turn provides the shifted Lucas wavelets. We furnish the operational matrix of differentiation and the product operational matrix of the shifted Lucas wavelets. Moreover, convergence and error analyses ensure the accuracy of the proposed method. Illustrative examples show that the present method is numerically fruitful, effective, and convenient for solving differential equations.
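To make the construction tangible, here is a small Python sketch that generates Lucas polynomials from their three-term recurrence and then shifts them to [0, 1]. The particular substitution x → 2x − 1 is one common convention and is an assumption here, as the abstract does not state the paper's exact shift.

```python
import numpy as np
from numpy.polynomial import Polynomial

def lucas_polynomials(n):
    """First n+1 Lucas polynomials from the recurrence
    L0(x) = 2, L1(x) = x, L_{k+1}(x) = x*L_k(x) + L_{k-1}(x)."""
    x = Polynomial([0.0, 1.0])
    polys = [Polynomial([2.0]), x]
    for _ in range(2, n + 1):
        polys.append(x * polys[-1] + polys[-2])
    return polys[: n + 1]

def shifted_lucas(n):
    """Shifted Lucas polynomials on [0, 1] via the substitution x -> 2x - 1."""
    shift = Polynomial([-1.0, 2.0])
    return [p(shift) for p in lucas_polynomials(n)]

print([p.coef.tolist() for p in shifted_lucas(3)])
```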
Kirk M. Soodhalter (2021)
Subspace recycling iterative methods and other subspace augmentation schemes are a successful extension to Krylov subspace methods, in which a Krylov subspace is augmented with a fixed subspace spanned by vectors deemed to be helpful in accelerating convergence or conveying knowledge of the solution. Recently, a survey was published in which a framework describing the vast majority of such methods was proposed [Soodhalter et al., GAMM-Mitt. 2020]. In many of these methods, the Krylov subspace is one generated by the system matrix composed with a projector that depends on the augmentation space. However, it is not a requirement that a projected Krylov subspace be used. There are augmentation methods built on using Krylov subspaces generated by the original system matrix, and these methods also fit into the general framework. In this note, we observe that one gains implementation benefits by considering such augmentation methods with unprojected Krylov subspaces in the general framework. We demonstrate this by applying the idea to the R$^3$GMRES method proposed in [Dong et al., ETNA 2014] to obtain a simplified implementation and to connect that algorithm to early augmentation schemes based on flexible preconditioning [Saad, SIMAX 1997].
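The distinction the note draws, augmenting with a Krylov space built from the original, unprojected operator, can be shown in a few lines of dense linear algebra. This is a schematic residual-minimization sketch over range(U) + K_m(A, b), not an implementation of R$^3$GMRES; a real code would exploit the Arnoldi relation instead of forming A @ W explicitly.

```python
import numpy as np

def unprojected_augmented_solve(A, b, U, m=20):
    """Minimize ||b - A x|| over x in range(U) + K_m(A, b).

    The Krylov basis is generated by the original A (no projector); the
    augmentation space U simply enlarges the least-squares search space.
    """
    n = len(b)
    V = np.zeros((n, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):                     # Arnoldi with Gram-Schmidt
        w = A @ V[:, j]
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        V[:, j + 1] = w / np.linalg.norm(w)
    W = np.hstack([U, V])                      # augmented search space
    z, *_ = np.linalg.lstsq(A @ W, b, rcond=None)
    return W @ z
```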
We propose an accurate algorithm for a novel sum-of-exponentials (SOE) approximation of kernel functions, and develop a fast algorithm for convolution quadrature based on the SOE, which allows an order-$N$ calculation for $N$ time steps of approximating a continuous temporal convolution integral. The SOE method is constructed by a combination of the de la Vallée-Poussin sums for a semi-analytical exponential expansion of a general kernel, and a model reduction technique for minimizing the number of exponentials under a given error tolerance. We employ the SOE expansion for the finite part of the splitting convolution kernel, such that the convolution integral can be solved as a system of ordinary differential equations owing to the exponential kernels. The remaining part is explicitly approximated by employing the generalized Taylor expansion. The significant features of our algorithm are that the SOE method is efficient and accurate, and works for general kernels with a controllable upper bound on the positive exponents. We provide numerical analysis for the SOE-based convolution quadrature. Numerical results on different kernels, the convolution integral, and integral equations demonstrate the attractive accuracy and efficiency of the proposed method.
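The reason an SOE kernel yields an order-$N$ convolution is that each exponential mode satisfies a one-step recurrence, so the history never needs to be revisited. The sketch below demonstrates this with a first-order rectangle rule per step; the paper's semi-analytical expansion, model reduction, and higher-order local corrections are all replaced by assumptions here.

```python
import numpy as np

def soe_convolution(weights, exponents, f_vals, dt):
    """O(N) evaluation of I(t_n) = ∫_0^{t_n} K(t_n - s) f(s) ds
    for an SOE kernel K(t) ≈ Σ_l w_l * exp(-a_l t), via the recurrence
        y_l(t_{n+1}) = exp(-a_l*dt) * (y_l(t_n) + f(t_n)*dt).
    """
    w = np.asarray(weights, dtype=float)
    decay = np.exp(-np.asarray(exponents, dtype=float) * dt)
    y = np.zeros_like(w)
    out = np.empty(len(f_vals))
    for n, fn in enumerate(f_vals):
        y = decay * (y + fn * dt)      # advance every exponential mode
        out[n] = w @ y                 # I(t_{n+1}) = Σ_l w_l * y_l
    return out

# sanity check for K(t) = exp(-t): compare against direct O(N^2) quadrature
dt, N = 0.01, 1000
f = np.sin(dt * np.arange(N))
fast = soe_convolution([1.0], [1.0], f, dt)
```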