
On the Linear and Asymptotically Superlinear Convergence Rates of the Augmented Lagrangian Method with a Practical Relative Error Criterion

Posted by Liang Chen
Publication date: 2019
Language: English





In this paper, we conduct a convergence rate analysis of the augmented Lagrangian method with the practical relative error criterion designed by Eckstein and Silva [Math. Program., 141, 319--348 (2013)] for convex nonlinear programming problems. We show that, under a mild local error bound condition, this method admits a locally Q-linear rate of convergence. More importantly, we show that the modulus of the convergence rate is inversely proportional to the penalty parameter. That is, asymptotically superlinear convergence is obtained if the penalty parameter used in the algorithm increases to infinity, and an arbitrarily small Q-linear convergence modulus can be guaranteed if the penalty parameter is fixed but sufficiently large. As a byproduct, the convergence, as well as the convergence rate, of the distance from the primal sequence to the solution set of the problem is also obtained.
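To fix ideas, the iteration analyzed in the paper can be sketched as follows for a convex program whose constraints are written as c(x) = 0. The notation (\sigma_k, \lambda^k, \tau) and the particular form of the relative stopping rule below are ours, chosen only for illustration; they are not the exact criterion of Eckstein and Silva.

\mathcal{L}_{\sigma_k}(x, \lambda^k) = f(x) + \langle \lambda^k, c(x) \rangle + \tfrac{\sigma_k}{2} \|c(x)\|^2,
x^{k+1} \approx \operatorname*{argmin}_x \ \mathcal{L}_{\sigma_k}(x, \lambda^k), \qquad \lambda^{k+1} = \lambda^k + \sigma_k\, c(x^{k+1}),

where the inexact minimization is accepted only if a relative condition such as

\|\nabla_x \mathcal{L}_{\sigma_k}(x^{k+1}, \lambda^k)\| \le \frac{\tau}{\sigma_k} \|\lambda^{k+1} - \lambda^k\|, \qquad \tau \in [0, 1),

holds. In this notation, the paper's main result says that the local Q-linear modulus behaves like a constant multiple of 1/\sigma_k, which is why driving \sigma_k to infinity yields asymptotically superlinear convergence.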



Read also

A multiplicative relative value iteration algorithm for solving the dynamic programming equation of the risk-sensitive control problem is studied for discrete-time controlled Markov chains with a compact Polish state space, and for controlled diffusions on the whole Euclidean space. The main result is a proof of convergence to the desired limit in each case.
In this paper, we follow the recent works on the explicit superlinear convergence rate of quasi-Newton methods. We focus on the classical Broyden's methods for solving nonlinear equations and establish explicit (local) superlinear convergence when the initial parameter and the approximate Jacobian are close enough to the solution. Our results reveal two natural trade-offs. The first is between the superlinear convergence rate and the radius of the neighborhood at initialization. The second is the balance between the initial distance to the solution and the initial distance of the approximate Jacobian to the true one. Moreover, our analysis covers the two original Broyden's methods: Broyden's good and bad methods. We identify the difference between them in terms of the size of the local convergence region and the dependence on the condition number.
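For reference, the two classical Broyden updates for solving F(x) = 0 that the abstract refers to are the standard textbook ones (the formulas below are not quoted from the cited paper). With s_k = x^{k+1} - x^k and y_k = F(x^{k+1}) - F(x^k), Broyden's good method updates the Jacobian approximation B_k,

B_{k+1} = B_k + \frac{(y_k - B_k s_k)\, s_k^{\top}}{s_k^{\top} s_k},

while Broyden's bad method updates an approximation H_k of the inverse Jacobian directly,

H_{k+1} = H_k + \frac{(s_k - H_k y_k)\, y_k^{\top}}{y_k^{\top} y_k}.

In both cases the next iterate is obtained from the quasi-Newton step x^{k+1} = x^k - B_k^{-1} F(x^k) (respectively x^{k+1} = x^k - H_k F(x^k)).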
We introduce a framework for designing primal methods in the decentralized optimization setting where the local functions are smooth and strongly convex. Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method, thereby providing a systematic way to derive several well-known decentralized algorithms, including EXTRA (arXiv:1404.6264) and SSDA (arXiv:1702.08704). When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds. We provide experimental results that demonstrate the effectiveness of the proposed algorithm on highly ill-conditioned problems.
We introduce a twice differentiable augmented Lagrangian for nonlinear optimization with general inequality constraints and show that a strict local minimizer of the original problem is an approximate strict local solution of the augmented Lagrangian. A novel augmented Lagrangian method of multipliers (ALM) is then presented. Our method originates from a generalization of the Hestenes-Powell augmented Lagrangian and combines the augmented Lagrangian with the interior-point technique. It shares a similar algorithmic framework with existing ALMs for optimization with inequality constraints, but it can use second derivatives and does not depend on projections onto the set of inequality constraints. In each iteration, our method solves a twice continuously differentiable unconstrained optimization subproblem in the primal variables. The dual iterates and the penalty and smoothing parameters are updated adaptively. The global and local convergence are analyzed. Without assuming any constraint qualification, it is proved that the proposed method has strong global convergence: it may converge to either a Karush-Kuhn-Tucker (KKT) point or a singular stationary point when the limit point is a minimizer, and it may converge to an infeasible stationary point of the nonlinear program when the problem is infeasible. Furthermore, our method is capable of rapidly detecting possible infeasibility of the solved problem. Under suitable conditions, it is locally linearly convergent to the KKT point, which is consistent with ALMs for optimization with equality constraints. Preliminary numerical experiments on some small benchmark test problems demonstrate our theoretical results.
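For background, the classical Hestenes-Powell augmented Lagrangian for an equality-constrained problem min f(x) subject to c(x) = 0 is the standard function (not quoted from the cited paper)

L_\sigma(x, \lambda) = f(x) + \lambda^{\top} c(x) + \tfrac{\sigma}{2} \|c(x)\|^2,

and its usual extension to inequality constraints g(x) \le 0,

L_\sigma(x, \lambda) = f(x) + \tfrac{1}{2\sigma} \big( \|\max(0, \lambda + \sigma g(x))\|^2 - \|\lambda\|^2 \big),

is in general only once continuously differentiable because of the max term; constructing a twice differentiable counterpart is the gap that the abstract above addresses.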
Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics and data analytics. In this paper, based on the augmented Lagrangian function, we introduce a flexible first-order primal-dual method, called the nonconvex auxiliary problem principle of augmented Lagrangian (NAPP-AL), for solving a class of nonlinearly constrained nonconvex and nonsmooth optimization problems. We demonstrate that NAPP-AL converges to a stationary solution at the rate of o(1/sqrt(k)), where k is the number of iterations. Moreover, under an additional error bound condition (called VP-EB in the paper), we further show that the convergence rate is in fact linear. Finally, we show that the well-known Kurdyka-Lojasiewicz property together with metric subregularity implies the aforementioned VP-EB condition.
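For readers unfamiliar with the last two conditions, a common textbook form of the Kurdyka-Lojasiewicz property of a proper lower semicontinuous function f at a point x^* is the following (standard statement, not quoted from the cited paper): there exist \eta > 0, a neighborhood U of x^*, and a concave function \varphi : [0, \eta) \to [0, \infty) with \varphi(0) = 0, continuously differentiable on (0, \eta) with \varphi' > 0, such that

\varphi'\big(f(x) - f(x^*)\big)\, \operatorname{dist}\big(0, \partial f(x)\big) \ge 1 \quad \text{for all } x \in U \text{ with } f(x^*) < f(x) < f(x^*) + \eta.

Metric subregularity of a set-valued mapping F at (x^*, 0) similarly asks that dist(x, F^{-1}(0)) \le \kappa\, dist(0, F(x)) for all x near x^*, for some constant \kappa > 0.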