
Augmented Lagrangian based first-order methods for convex-constrained programs with weakly-convex objective

Published by: Yangyang Xu
Publication date: 2020
Research field: Information Engineering
Language: English





First-order methods (FOMs) have been widely used for solving large-scale problems. A majority of existing works focus on problems without constraints or with simple constraints. Several recent works have studied FOMs for problems with complicated functional constraints. In this paper, we design a novel augmented Lagrangian (AL) based FOM for solving problems with a non-convex objective and convex constraint functions. The new method follows the framework of the proximal point (PP) method. To approximately solve the PP subproblems, it mixes the inexact AL method (iALM) with the quadratic penalty method, where the latter is always fed multiplier estimates produced by the iALM. We show a complexity result of $O(\varepsilon^{-\frac{5}{2}}|\log\varepsilon|)$ for the proposed method to achieve an $\varepsilon$-KKT point. This is the best known result. Theoretically, the hybrid method has a lower iteration-complexity requirement than its counterpart that only uses the iALM to solve PP subproblems, and numerically, it can perform significantly better than a pure-penalty-based method. Numerical experiments are conducted on nonconvex linearly constrained quadratic programs and nonconvex QCQPs. The numerical results demonstrate the efficiency of the proposed methods over existing ones.
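To make the hybrid structure concrete, here is a minimal sketch, not the authors' implementation: a PP outer loop whose subproblems are handled by AL-type gradient steps, with occasional iALM phases that refresh the multiplier estimates and cheaper penalty-style phases that reuse them. The toy problem, step sizes, schedule, and all function names below are illustrative assumptions.

```python
# Sketch only: hybrid proximal-point (PP) loop in the spirit described above.
# The toy problem, step sizes, and phase schedule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 5
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2            # indefinite Hessian -> nonconvex f
c = rng.standard_normal(n)
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)   # convex constraints: A x - b <= 0

f      = lambda x: 0.5 * x @ Q @ x + c @ x
grad_f = lambda x: Q @ x + c
g      = lambda x: A @ x - b

def solve_subproblem(xk, lam, rho, beta, inner=300, step=1e-3):
    """Gradient descent on the augmented Lagrangian of the PP subproblem
    min f(x) + (rho/2)||x - xk||^2  s.t.  g(x) <= 0, with multipliers lam held fixed."""
    x = xk.copy()
    for _ in range(inner):
        s = np.maximum(g(x) + lam / beta, 0.0)                # shifted constraint residual
        x -= step * (grad_f(x) + rho * (x - xk) + beta * A.T @ s)
    return x

rho = 2.0 * np.abs(np.linalg.eigvalsh(Q)).max()               # exceeds weak-convexity modulus
x, lam, beta = np.zeros(n), np.zeros(m), 1.0
for k in range(30):
    x = solve_subproblem(x, lam, rho, beta)
    if k % 5 == 0:                 # iALM phase: refresh multipliers, grow penalty
        lam = np.maximum(lam + beta * g(x), 0.0)
        beta *= 1.5
    # other iterations act as quadratic-penalty solves fed with the iALM estimate lam

print("max violation:", np.maximum(g(x), 0.0).max(), " f(x):", f(x))
```

The proximal weight rho is chosen larger than the weak-convexity modulus of f, so each PP subproblem is strongly convex; this is the standard mechanism that makes AL-type inner solvers applicable.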




Read also

Zhaosong Lu, Zirui Zhou (2018)
In this paper we consider a class of convex conic programming. In particular, we propose an inexact augmented Lagrangian (I-AL) method for solving this problem, in which the augmented Lagrangian subproblems are solved approximately by a variant of Nesterov's optimal first-order method. We show that the total number of first-order iterations of the proposed I-AL method for computing an $\epsilon$-KKT solution is at most $\mathcal{O}(\epsilon^{-7/4})$. We also propose a modified I-AL method and show that it has an improved iteration-complexity of $\mathcal{O}(\epsilon^{-1}\log\epsilon^{-1})$, which is so far the lowest complexity bound among all first-order I-AL-type methods for computing an $\epsilon$-KKT solution. Our complexity analysis of the I-AL methods is mainly based on an analysis of the inexact proximal point algorithm (PPA) and the link between the I-AL methods and inexact PPA. It is substantially different from the existing complexity analyses of first-order I-AL methods in the literature, which typically regard the I-AL methods as an inexact dual gradient method. Compared to the most closely related I-AL methods \cite{Lan16}, our modified I-AL method is more practically efficient and also applicable to a broader class of problems.
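As a rough illustration of the I-AL structure (not the paper's method for general conic programs), the sketch below specializes to linear equality constraints $Ax = b$ and solves each AL subproblem with a standard Nesterov accelerated gradient loop. The smoothness estimate, penalty schedule, and all names are assumptions.

```python
# Sketch only: inexact augmented Lagrangian with Nesterov-accelerated subproblem
# solves, specialized to min f(x) s.t. A x = b. All parameters are illustrative.
import numpy as np

def nesterov(grad, x0, L, iters=100):
    """Accelerated gradient descent for an L-smooth convex function."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def inexact_al(grad_f, L_f, A, b, x0, beta=1.0, outer=20):
    x, lam = x0.copy(), np.zeros(A.shape[0])
    for _ in range(outer):
        # AL subproblem: min_x f(x) + lam'(Ax - b) + (beta/2)||Ax - b||^2
        grad_AL = lambda z: grad_f(z) + A.T @ (lam + beta * (A @ z - b))
        L = L_f + beta * np.linalg.norm(A, 2) ** 2    # smoothness of the AL in x
        x = nesterov(grad_AL, x, L)
        lam = lam + beta * (A @ x - b)                # multiplier (dual ascent) step
        beta *= 1.5                                   # mild penalty growth
    return x, lam

# toy usage: f(x) = 0.5||x||^2 subject to x1 + x2 = 1 (solution (0.5, 0.5))
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, lam = inexact_al(lambda z: z, 1.0, A, b, np.zeros(2))
print(x)
```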
Optimization models with non-convex constraints arise in many tasks in machine learning, e.g., learning with fairness constraints or Neyman-Pearson classification with non-convex loss. Although many efficient methods have been developed with theoretical convergence guarantees for non-convex unconstrained problems, it remains a challenge to design provably efficient algorithms for problems with non-convex functional constraints. This paper proposes a class of subgradient methods for constrained optimization where the objective function and the constraint functions are weakly convex. Our methods solve a sequence of strongly convex subproblems, where a proximal term is added to both the objective function and each constraint function. Each subproblem can be solved by various algorithms for strongly convex optimization. Under a uniform Slater's condition, we establish the computational complexities of our methods for finding a nearly stationary point.
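Schematically, with $\hat\rho$ an assumed proximal parameter exceeding the weak-convexity modulus (so each subproblem is strongly convex) and $\eta_k \ge 0$ an assumed slack keeping the subproblem feasible, the $k$-th subproblem takes the form

$$ x_{k+1} \approx \operatorname*{arg\,min}_{x} \Big\{ f(x) + \tfrac{\hat\rho}{2}\|x - x_k\|^2 \;:\; g_i(x) + \tfrac{\hat\rho}{2}\|x - x_k\|^2 \le \eta_k, \ i = 1, \dots, m \Big\}, $$

where the added quadratic regularizes both the objective and each constraint. This is only a schematic reading of the construction described in the abstract, not the paper's exact subproblem.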
Shengjie Xu (2021)
The augmented Lagrangian method (ALM) is a fundamental tool for solving the canonical convex minimization problem with linear constraints, and implementing the original ALM efficiently and easily is of clear practical significance. Recently, He and Yuan have proposed a balanced version of the ALM [B.S. He and X.M. Yuan, arXiv:2108.08554, 2021], which reshapes the original ALM by balancing its subproblems and makes the benchmark ALM easier to implement without any additional condition. In practice, the balanced ALM updates the new iterate in a primal-dual order. In this note, exploiting the variational inequality structure of the most recent balanced ALM, we propose a dual-primal version of the balanced ALM for linearly constrained convex minimization problems. The newly proposed method generates the new iterate in a dual-primal order and has the same computational difficulty as the original primal-dual balanced ALM. Furthermore, under the lens of the proximal point algorithm, we conduct the convergence analysis of the newly introduced method in the context of variational inequalities. Numerical tests on the basis pursuit problem demonstrate that the introduced method matches the high efficiency of the prototype balanced ALM.
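For orientation, writing the problem as $\min\{f(x) : Ax = b\}$ and letting $H \succ 0$ stand in for the balancing weight used in the cited works (its exact form is omitted here), the two update orders can be contrasted schematically:

$$ \text{primal-dual:} \quad x^{k+1} = \operatorname*{arg\,min}_x \Big\{ f(x) + \langle \lambda^k, Ax - b \rangle + \tfrac{r}{2}\|x - x^k\|^2 \Big\}, \qquad \lambda^{k+1} = \lambda^k + H^{-1}(A x^{k+1} - b); $$

$$ \text{dual-primal:} \quad \lambda^{k+1} = \lambda^k + H^{-1}(A x^k - b), \qquad x^{k+1} = \operatorname*{arg\,min}_x \Big\{ f(x) + \langle \lambda^{k+1}, Ax - b \rangle + \tfrac{r}{2}\|x - x^k\|^2 \Big\}. $$

Only the update ordering is depicted here; the precise balanced subproblems are as given in the references above.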
The shape-constrained convex regression problem deals with fitting a convex function to observed data, where additional constraints are imposed, such as component-wise monotonicity and uniform Lipschitz continuity. This paper provides a unified framework for computing the least squares estimator of a multivariate shape-constrained convex regression function in $\mathbb{R}^d$. We prove that the least squares estimator is computable via solving an essentially constrained convex quadratic programming (QP) problem with $(n+1)d$ variables, $n(n-1)$ linear inequality constraints, and $n$ possibly non-polyhedral inequality constraints, where $n$ is the number of data points. To efficiently solve the generally very large-scale convex QP, we design a proximal augmented Lagrangian method (proxALM) whose subproblems are solved by the semismooth Newton method (SSN). To further accelerate the computation when $n$ is huge, we design a practical implementation of the constraint generation method such that each reduced problem is efficiently solved by our proposed proxALM. Comprehensive numerical experiments, including those on the pricing of basket options and the estimation of production functions in economics, demonstrate that our proposed proxALM outperforms the state-of-the-art algorithms, and the proposed acceleration technique further shortens the computation time by a large margin.
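For context, the underlying least squares problem is the classical convex-regression QP; in standard (assumed) notation, with data $(x_i, y_i)_{i=1}^n$ one fits function values $\theta_i$ and subgradients $\xi_i \in \mathbb{R}^d$:

$$ \min_{\theta, \xi} \ \tfrac{1}{2} \sum_{i=1}^n (\theta_i - y_i)^2 \quad \text{s.t.} \quad \theta_j \ge \theta_i + \xi_i^\top (x_j - x_i) \ \text{ for all } i \ne j, $$

whose pairwise inequalities account for the $n(n-1)$ linear constraints counted above, while side constraints such as uniform Lipschitz bounds $\|\xi_i\| \le L$ contribute the $n$ possibly non-polyhedral ones.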
Jongho Park (2020)
Based on the observation that additive Schwarz methods for general convex optimization can be interpreted as gradient methods, we propose an acceleration scheme for additive Schwarz methods. By adopting acceleration techniques developed for gradient methods, such as momentum and adaptive restarting, the convergence rate of additive Schwarz methods is greatly improved. The proposed acceleration scheme does not require any a priori information on the levels of smoothness and sharpness of a target energy functional, so it can be applied to various convex optimization problems. Numerical results for linear elliptic problems, nonlinear elliptic problems, nonsmooth problems, and nonsharp problems are provided to highlight the superiority and broad applicability of the proposed scheme.
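The sketch below illustrates the acceleration idea under stated assumptions: the additive Schwarz sweep is abstracted as a generic `step` operator, and a damped Jacobi sweep (the simplest additive Schwarz method, with one-point subdomains) serves as the toy instance. It is a schematic of momentum plus function-value adaptive restart, not the paper's exact scheme.

```python
# Sketch only: momentum with adaptive restart wrapped around a generic sweep operator.
import numpy as np

def accelerated(step, energy, x0, iters=200):
    """Nesterov-type momentum with function-value adaptive restart."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    E_prev = energy(x)
    for _ in range(iters):
        x_new = step(y)                      # one additive-Schwarz-like sweep from y
        if energy(x_new) > E_prev:           # adaptive restart: discard momentum
            y, t = x.copy(), 1.0
            x_new = step(y)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t, E_prev = x_new, t_new, energy(x_new)
    return x

# toy usage: quadratic energy with a damped Jacobi sweep (one-point subdomains)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
b = np.ones(n)
step = lambda y: y - 0.7 * (A @ y - b) / np.diag(A)
energy = lambda y: 0.5 * y @ A @ y - b @ y
x = accelerated(step, energy, np.zeros(n))
print("residual:", np.linalg.norm(A @ x - b))
```

Note that the wrapper queries only the energy and the sweep itself, which mirrors the abstract's point that no smoothness or sharpness levels need to be known in advance.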