We introduce a twice differentiable augmented Lagrangian for nonlinear optimization with general inequality constraints and show that a strict local minimizer of the original problem is an approximate strict local solution of the augmented Lagrangian. A novel augmented Lagrangian method of multipliers (ALM) is then presented. Our method originates from a generalization of the Hestenes-Powell augmented Lagrangian and combines the augmented Lagrangian with an interior-point technique. It shares a similar algorithmic framework with existing ALMs for optimization with inequality constraints, but it can use second derivatives and does not depend on projections onto the set of inequality constraints. In each iteration, our method solves a twice continuously differentiable unconstrained optimization subproblem in the primal variables. The dual iterates, penalty and smoothing parameters are updated adaptively. Global and local convergence are analyzed. Without assuming any constraint qualification, the proposed method is proved to have strong global convergence: it may converge to either a Karush-Kuhn-Tucker (KKT) point or a singular stationary point when the limit point is a minimizer, and it may converge to an infeasible stationary point of the nonlinear program when the problem is infeasible. Furthermore, our method is capable of rapidly detecting possible infeasibility of the solved problem. Under suitable conditions, it is locally linearly convergent to the KKT point, which is consistent with ALMs for optimization with equality constraints. Preliminary numerical experiments on small benchmark test problems demonstrate our theoretical results.
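For orientation, the sketch below illustrates the general ALM pattern the abstract refers to: an outer loop that alternates unconstrained minimization of an augmented Lagrangian in the primal variables with multiplier updates. It uses the classical Rockafellar (only once-differentiable) augmented Lagrangian for inequality constraints, not the twice-differentiable one proposed in the paper, and the objective f, constraints g, starting point, penalty value and stopping test are illustrative assumptions rather than the authors' algorithm.

```python
# Minimal sketch of a classical augmented-Lagrangian outer loop for
#   min f(x)  s.t.  g(x) <= 0.
# This uses the standard Rockafellar augmented Lagrangian; the paper's
# method instead minimizes a twice continuously differentiable augmented
# Lagrangian and updates duals, penalty and smoothing parameters adaptively.
import numpy as np
from scipy.optimize import minimize

def f(x):                                  # assumed objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):                                  # assumed inequality constraints g(x) <= 0
    return np.array([x[0] + x[1] - 2.0, -x[0]])

def aug_lag(x, lam, rho):
    """Rockafellar augmented Lagrangian for inequality constraints."""
    shifted = np.maximum(0.0, lam + rho * g(x))
    return f(x) + (np.sum(shifted ** 2) - np.sum(lam ** 2)) / (2.0 * rho)

x, lam, rho = np.zeros(2), np.zeros(2), 10.0
for k in range(30):
    # Inner step: unconstrained minimization of the augmented Lagrangian in x.
    x = minimize(aug_lag, x, args=(lam, rho), method="BFGS").x
    # First-order multiplier update (kept nonnegative for inequalities).
    lam = np.maximum(0.0, lam + rho * g(x))
    # Crude stopping test on constraint violation.
    if np.all(g(x) <= 1e-6):
        break
print(x, lam)
```

On the assumed toy problem this loop converges to the KKT point near (0.5, 1.5); the paper's contribution is that its subproblems remain twice continuously differentiable, so second-derivative (Newton-type) inner solvers can be used.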