
A novel augmented Lagrangian method of multipliers for optimization with general inequality constraints

Added by Yakui Huang
Publication date: 2021
Language: English





We introduce a twice differentiable augmented Lagrangian for nonlinear optimization with general inequality constraints and show that a strict local minimizer of the original problem is an approximate strict local solution of the augmented Lagrangian. A novel augmented Lagrangian method of multipliers (ALM) is then presented. Our method originates from a generalization of the Hestenes-Powell augmented Lagrangian and combines the augmented Lagrangian with the interior-point technique. It shares a similar algorithmic framework with existing ALMs for optimization with inequality constraints, but it can use second derivatives and does not depend on projections onto the set of inequality constraints. In each iteration, our method solves a twice continuously differentiable unconstrained optimization subproblem in the primal variables. The dual iterates, penalty and smoothing parameters are updated adaptively. Both global and local convergence are analyzed. Without assuming any constraint qualification, the proposed method is proved to have strong global convergence: it may converge to either a Karush-Kuhn-Tucker (KKT) point or a singular stationary point when the limit point is a minimizer, and it may converge to an infeasible stationary point of the nonlinear program when the problem is infeasible. Furthermore, our method is capable of rapidly detecting possible infeasibility of the problem being solved. Under suitable conditions, it is locally linearly convergent to the KKT point, which is consistent with ALMs for optimization with equality constraints. Preliminary numerical experiments on some small benchmark test problems demonstrate our theoretical results.
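To make the generic outer ALM framework referenced above concrete, here is a minimal sketch of the classical projection-based augmented Lagrangian loop for inequality constraints (the Hestenes-Powell/Rockafellar form). It is only an illustration of the primal-dual iteration pattern, not the projection-free, twice differentiable augmented Lagrangian proposed in the paper; the fixed penalty rho, the BFGS subproblem solver, and all names are assumptions made for this sketch.

```python
# A minimal sketch of the classical augmented Lagrangian loop for
#   min f(x)  s.t.  g(x) <= 0,
# included only to illustrate the generic outer primal-dual framework.
# It uses the standard projection-based multiplier update max(0, .), so it is
# NOT the projection-free, twice differentiable augmented Lagrangian of the
# paper; rho, the BFGS subproblem solver, and all names are assumptions.
import numpy as np
from scipy.optimize import minimize

def classical_alm(f, g, x0, rho=10.0, n_outer=30, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(np.atleast_1d(g(x)).size)      # multipliers for g(x) <= 0
    for _ in range(n_outer):
        def aug_lag(z):                           # classical augmented Lagrangian
            gz = np.atleast_1d(g(z))
            return f(z) + np.sum(np.maximum(0.0, lam + rho * gz) ** 2
                                 - lam ** 2) / (2.0 * rho)
        x = minimize(aug_lag, x, method="BFGS").x                # primal subproblem
        lam = np.maximum(0.0, lam + rho * np.atleast_1d(g(x)))   # dual update
        if np.max(np.atleast_1d(g(x))) <= tol:                   # rough feasibility check
            break
    return x, lam

# toy usage: minimize (x1 - 2)^2 + (x2 - 1)^2 subject to x1 + x2 <= 2
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: np.array([x[0] + x[1] - 2.0])
x_star, lam_star = classical_alm(f, g, x0=[0.0, 0.0])  # approx (1.5, 0.5), lam approx 1
```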



Related research

Liwei Zhang, Yule Zhang, Jia Wu (2021)
This paper considers the problem of minimizing a convex expectation function subject to a set of convex expectation inequality constraints. We present a computable stochastic approximation type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. The algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that this algorithm exhibits $O(K^{-1/2})$ expected convergence rates for both objective reduction and constraint violation if the parameters in the algorithm are properly chosen, where $K$ denotes the number of iterations. Moreover, we show that, with high probability, the algorithm has an $O(\log(K)K^{-1/2})$ constraint violation bound and an $O(\log^{3/2}(K)K^{-1/2})$ objective bound. Some preliminary numerical results demonstrate the performance of the proposed algorithm.
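For orientation, the following is a deliberately simplified stochastic primal-dual step for this kind of expectation-constrained problem. It only illustrates the pattern of a stochastic gradient step on the primal variables followed by projected multiplier ascent; the stochastic linearized proximal method of multipliers described above additionally linearizes the constraints and adds a proximal term, and all names and step sizes below are assumptions for this sketch.

```python
# A deliberately simplified stochastic primal-dual step for
#   min E[f(x, xi)]  s.t.  E[g(x, xi)] <= 0.
# It only illustrates the "stochastic gradient on the primal, projected
# multiplier ascent on the dual" pattern; it is not the paper's stochastic
# linearized proximal method of multipliers. All names are assumptions.
import numpy as np

def stochastic_primal_dual_step(x, lam, grad_f_sample, g_sample, jac_g_sample,
                                step_primal, step_dual):
    # primal: stochastic gradient of the Lagrangian at the current sample
    x_new = x - step_primal * (grad_f_sample(x) + jac_g_sample(x).T @ lam)
    # dual: ascent on the multipliers, projected onto lam >= 0
    lam_new = np.maximum(0.0, lam + step_dual * g_sample(x_new))
    return x_new, lam_new
```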
This paper is devoted to studying an inexact augmented Lagrangian method for solving a class of manifold optimization problems, which have nonsmooth objective functions and nonnegativity constraints. Under the constant positive linear dependence condition on the manifold, we show that the proposed method converges to a stationary point of the nonsmooth manifold optimization problem. Moreover, we propose a globalized semismooth Newton method to efficiently solve the augmented Lagrangian subproblem on manifolds. The local superlinear convergence of the manifold semismooth Newton method is also established under some suitable conditions. Finally, numerical experiments on compressed modes and (constrained) sparse PCA illustrate the advantages of the proposed method in terms of accuracy and computational efficiency.
Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics and data analytics. In this paper, based on the augmented Lagrangian function, we introduce a flexible first-order primal-dual method, called the nonconvex auxiliary problem principle of augmented Lagrangian (NAPP-AL), for solving a class of nonlinearly constrained nonconvex and nonsmooth optimization problems. We demonstrate that NAPP-AL converges to a stationary solution at the rate of $o(1/\sqrt{k})$, where $k$ is the number of iterations. Moreover, under an additional error bound condition (called VP-EB in the paper), we further show that the convergence rate is in fact linear. Finally, we show that the well-known Kurdyka-Łojasiewicz property and metric subregularity imply the aforementioned VP-EB condition.
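As a rough illustration of the first-order augmented-Lagrangian pattern described above, the sketch below performs one linearized augmented Lagrangian step (a proximal-gradient step on the primal variables plus multiplier ascent) for an equality-constrained problem with a nonsmooth regularizer. It is not the NAPP-AL update itself; the auxiliary-problem construction and parameter rules of NAPP-AL differ, and all names, step sizes, and the soft-thresholding prox are assumptions for this sketch.

```python
# A schematic "linearized augmented Lagrangian" step for
#   min f(x) + r(x)  s.t.  h(x) = 0,
# with smooth f, h and a nonsmooth regularizer r handled through its prox.
# This only illustrates the first-order primal-dual pattern; it is not the
# concrete NAPP-AL update, and all names below are assumptions.
import numpy as np

def linearized_al_step(x, lam, grad_f, h, jac_h, prox_r, rho, alpha):
    # gradient of the smooth part of the augmented Lagrangian at (x, lam)
    grad = grad_f(x) + jac_h(x).T @ (lam + rho * h(x))
    # proximal-gradient step on the primal variables
    x_new = prox_r(x - alpha * grad, alpha)
    # multiplier (dual) ascent step
    lam_new = lam + rho * h(x_new)
    return x_new, lam_new

# example prox for r(x) = mu * ||x||_1: soft thresholding
def soft_threshold(v, alpha, mu=0.1):
    return np.sign(v) * np.maximum(np.abs(v) - alpha * mu, 0.0)
```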
Yinqiao Yan, Qingna Li (2019)
Support vector machine (SVM) has proved to be a successful approach for machine learning. Two typical SVM models are the L1-loss model for support vector classification (SVC) and the $\epsilon$-L1-loss model for support vector regression (SVR). Due to the nonsmoothness of the L1-loss function in the two models, most traditional approaches focus on solving the dual problem. In this paper, we propose an augmented Lagrangian method for the L1-loss model, which is designed to solve the primal problem. By tackling the nonsmooth term in the model with Moreau-Yosida regularization and the proximal operator, the subproblem in the augmented Lagrangian method reduces to a nonsmooth linear system, which can be solved via the quadratically convergent semismooth Newton method. Moreover, the high computational cost of the semismooth Newton method can be significantly reduced by exploiting the sparse structure of the generalized Jacobian. Numerical results on various datasets from LIBLINEAR show that the proposed method is competitive with the most popular solvers in both speed and accuracy.
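To illustrate the Moreau-Yosida/proximal-operator device mentioned in this abstract, the snippet below gives the proximal operator and Moreau envelope of the scalar plus function max(0, t), which underlies L1-type hinge losses. The actual ALM subproblem and the sparse semismooth Newton solver in the paper are considerably more involved; this is only a self-contained illustration, and the function names are assumptions.

```python
# Proximal operator and Moreau envelope of the scalar plus function
# t -> max(0, t), which underlies L1-type hinge losses. This is only a
# self-contained illustration of the Moreau-Yosida / proximal-operator
# device, not the paper's ALM subproblem or its semismooth Newton solver.
import numpy as np

def prox_plus(v, sigma):
    """Elementwise prox of max(0, t) with parameter sigma:
       argmin_t  max(0, t) + (1 / (2 * sigma)) * (t - v)**2."""
    v = np.asarray(v, dtype=float)
    return np.where(v > sigma, v - sigma, np.minimum(v, 0.0))

def moreau_envelope_plus(v, sigma):
    """Moreau envelope of max(0, t): a smooth approximation obtained by
       evaluating the prox objective at the prox point."""
    p = prox_plus(v, sigma)
    return np.maximum(0.0, p) + (p - v) ** 2 / (2.0 * sigma)
```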
We introduce a framework for designing primal methods in the decentralized optimization setting where the local functions are smooth and strongly convex. Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method, thereby providing a systematic way to derive several well-known decentralized algorithms, including EXTRA (arXiv:1404.6264) and SSDA (arXiv:1702.08704). When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matches recently derived lower bounds. We provide experimental results that demonstrate the effectiveness of the proposed algorithm on highly ill-conditioned problems.