In this paper, we conduct a convergence rate analysis of the augmented Lagrangian method with the practical relative error criterion designed in Eckstein and Silva [Math. Program., 141, 319--348 (2013)] for convex nonlinear programming problems. We show that, under a mild local error bound condition, this method admits a local Q-linear rate of convergence. More importantly, we show that the modulus of the convergence rate is inversely proportional to the penalty parameter: an asymptotically superlinear rate is obtained if the penalty parameter increases to infinity, and a Q-linear rate with arbitrarily small modulus can be guaranteed if the penalty parameter is fixed but sufficiently large. Moreover, as a byproduct, the convergence, as well as the convergence rate, of the distance from the primal sequence to the solution set of the problem is obtained.
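For orientation, this inverse dependence mirrors the classical proximal point bound of Rockafellar for the dual sequence of the augmented Lagrangian method. As a sketch (with neighborhoods and exact constants suppressed, so this is illustrative rather than the paper's precise statement): if the dual solution set $\Lambda^*$ satisfies a Lipschitz-type error bound with modulus $a > 0$, the multiplier iterates generated with penalty parameters $\sigma_k$ satisfy

\[
  \mathrm{dist}(\lambda^{k+1}, \Lambda^*) \;\le\; \frac{a}{\sqrt{a^2 + \sigma_k^2}}\, \mathrm{dist}(\lambda^k, \Lambda^*),
\]

so the Q-linear modulus behaves like $a/\sigma_k$ for large $\sigma_k$: it can be made arbitrarily small by fixing $\sigma_k$ large enough, and it vanishes, yielding superlinear convergence, when $\sigma_k \to \infty$.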
A multiplicative relative value iteration algorithm for solving the dynamic programming equation of the risk-sensitive control problem is studied for discrete-time controlled Markov chains with a compact Polish state space, and for controlled diffusions.
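To make the "multiplicative" normalization concrete in the discrete-time case (the operator below is the standard risk-sensitive Bellman operator; the normalization point $x_0$ and the update form are an assumed schematic, not necessarily the paper's exact scheme): with running cost $c$ and transition kernel $P$, one iterates

\[
  (T V)(x) = \min_u \, e^{c(x,u)} \int V(y)\, P(dy \mid x, u), \qquad V_{n+1} = \frac{T V_n}{(T V_n)(x_0)},
\]

where $x_0$ is a fixed reference state; dividing by $(T V_n)(x_0)$ plays the role that subtracting a reference value plays in ordinary (additive) relative value iteration.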
In this paper, we follow recent works on the explicit superlinear convergence rates of quasi-Newton methods. We focus on the classical Broyden's methods for solving nonlinear equations and establish explicit (local) superlinear convergence if the in
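Broyden's "good" method itself is standard: it maintains a Jacobian approximation $B_k$ and applies a rank-one secant update after each step. A minimal numpy sketch (the function names and the test problem are illustrative choices, not the paper's analyzed setup):

    import numpy as np

    def broyden_good(f, x0, tol=1e-10, max_iter=50):
        # Maintain Jacobian approximation B; after each step enforce the
        # secant condition via B += (df - B dx) dx^T / (dx^T dx).
        x = np.asarray(x0, dtype=float)
        B = np.eye(x.size)
        fx = f(x)
        for _ in range(max_iter):
            if np.linalg.norm(fx) < tol:
                break
            dx = np.linalg.solve(B, -fx)   # quasi-Newton step
            x_new = x + dx
            f_new = f(x_new)
            B += np.outer(f_new - fx - B @ dx, dx) / (dx @ dx)
            x, fx = x_new, f_new
        return x

    # Example: intersect the unit circle with the line x0 = x1.
    root = broyden_good(lambda z: np.array([z[0]**2 + z[1]**2 - 1.0,
                                            z[0] - z[1]]),
                        [1.0, 0.5])

Explicit rates of the kind the abstract refers to quantify how fast iterations of exactly this form contract near a solution.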
We introduce a framework for designing primal methods in the decentralized optimization setting where local functions are smooth and strongly convex. Our approach consists of approximately solving a sequence of sub-problems induced by the accelera
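For reference, the baseline primal update that decentralized frameworks of this kind refine (this is the standard decentralized gradient step, stated here for context rather than as the snippet's proposed method): with a doubly stochastic mixing matrix $W$ over the communication graph and local objectives $f_i$, agent $i$ updates

\[
  x_i^{k+1} = \sum_{j} W_{ij}\, x_j^k - \alpha \nabla f_i(x_i^k),
\]

combining one round of neighbor averaging with one local gradient step of size $\alpha$.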
We introduce a twice differentiable augmented Lagrangian for nonlinear optimization with general inequality constraints and show that a strict local minimizer of the original problem is an approximate strict local solution of the augmented Lagrangian.
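For context, the classical Hestenes-Powell-Rockafellar augmented Lagrangian for the inequality-constrained problem $\min f(x)$ s.t. $g_i(x) \le 0$ is

\[
  L_\sigma(x, \lambda) = f(x) + \frac{1}{2\sigma} \sum_i \Big( \max\{0,\; \lambda_i + \sigma g_i(x)\}^2 - \lambda_i^2 \Big),
\]

which is continuously differentiable but, because of the max term, generally not twice differentiable even when $f$ and the $g_i$ are smooth; a twice differentiable augmented Lagrangian therefore requires a different, smoothed construction of the kind this abstract refers to.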
Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics, and data analytics. In this paper, based on the augmented Lagrangian function, we introduce a flexible first-order
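To fix ideas, a generic first-order augmented Lagrangian loop alternates cheap gradient steps on the augmented Lagrangian with multiplier updates. A minimal Python sketch for the smooth, equality-constrained case (the function names and the equality-only setting are simplifying assumptions; the abstract's method targets the harder nonsmooth, inequality-constrained setting):

    import numpy as np

    def first_order_alm(grad_f, c, jac_c, x0, lam0, sigma=10.0,
                        step=0.05, inner_iters=200, outer_iters=25):
        # min f(x) s.t. c(x) = 0, via gradient steps on
        # L_sigma(x, lam) = f(x) + lam' c(x) + (sigma/2) ||c(x)||^2.
        x, lam = np.asarray(x0, float), np.asarray(lam0, float)
        for _ in range(outer_iters):
            for _ in range(inner_iters):
                g = grad_f(x) + jac_c(x).T @ (lam + sigma * c(x))
                x = x - step * g          # first-order step on L_sigma
            lam = lam + sigma * c(x)      # multiplier update
        return x, lam

    # Example: minimize ||x||^2 subject to x0 + x1 = 1 (solution (0.5, 0.5)).
    x_star, lam_star = first_order_alm(
        grad_f=lambda x: 2.0 * x,
        c=lambda x: np.array([x[0] + x[1] - 1.0]),
        jac_c=lambda x: np.array([[1.0, 1.0]]),
        x0=np.zeros(2), lam0=np.zeros(1))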