
A Majorized ADMM with Indefinite Proximal Terms for Linearly Constrained Convex Composite Optimization

Posted by: Defeng Sun
Publication date: 2014
Paper language: English





This paper presents a majorized alternating direction method of multipliers (ADMM) with indefinite proximal terms for solving linearly constrained $2$-block convex composite optimization problems with each block in the objective being the sum of a non-smooth convex function and a smooth convex function, i.e., $\min_{x \in \mathcal{X},\; y \in \mathcal{Y}} \{\, p(x) + f(x) + q(y) + g(y) \mid A^* x + B^* y = c \,\}$. By choosing the indefinite proximal terms properly, we establish the global convergence and the $O(1/k)$ ergodic iteration complexity of the proposed method for the step-length $\tau \in (0, (1+\sqrt{5})/2)$. The computational benefit of using indefinite proximal terms within the ADMM framework, instead of the currently required positive semidefinite ones, is also demonstrated numerically. This opens up a new way to improve the practical performance of the ADMM and related methods.
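To make the problem template and the step-length range concrete, the sketch below runs the classical 2-block ADMM on a lasso-type toy instance, i.e. $p \equiv 0$, $f(x) = \tfrac12\|Dx-b\|^2$, $q(y) = \lambda\|y\|_1$, $g \equiv 0$, $A = I$, $B = -I$, $c = 0$, with a dual step-length $\tau$ close to $(1+\sqrt{5})/2$. This is a minimal illustration only, not the paper's general majorized scheme with indefinite proximal terms; the data and parameter names (`D`, `b`, `lam`, `sigma`, `tau`) are illustrative choices.

```python
# Minimal 2-block ADMM sketch on min 0.5*||D x - b||^2 + lam*||y||_1  s.t. x - y = 0.
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 40
D = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam, sigma = 0.5, 1.0                    # l1 weight and augmented-Lagrangian parameter
tau = (1 + np.sqrt(5)) / 2 - 0.05        # dual step-length inside (0, (1+sqrt(5))/2)

x = np.zeros(n); y = np.zeros(n); z = np.zeros(n)   # primal blocks and multiplier
DtD, Dtb = D.T @ D, D.T @ b
for k in range(300):
    # x-block: smooth quadratic subproblem, solved exactly
    x = np.linalg.solve(DtD + sigma * np.eye(n), Dtb + sigma * y - z)
    # y-block: proximal map of the l1 norm (soft-thresholding)
    v = x + z / sigma
    y = np.sign(v) * np.maximum(np.abs(v) - lam / sigma, 0.0)
    # multiplier update with the enlarged step-length tau
    z = z + tau * sigma * (x - y)

print("constraint violation ||x - y|| =", np.linalg.norm(x - y))
```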




Read also

Minimax optimization problems are an important class of optimization problems arising from modern machine learning and from traditional research areas. While there are many numerical algorithms for solving smooth convex-concave minimax problems, numerical algorithms for nonsmooth convex-concave minimax problems are very rare. This paper aims to develop an efficient numerical algorithm for a structured nonsmooth convex-concave minimax problem. A majorized semi-proximal alternating coordinate method (mspACM) is proposed, in which a majorized quadratic convex-concave function is adopted to approximate the smooth part of the objective function and semi-proximal terms are added to each subproblem. This construction ensures that the subproblems at each iteration are solvable, and even easily solved when the semi-proximal terms are chosen cleverly. We prove the global convergence of the algorithm mspACM under mild assumptions, without requiring a strong convexity-concavity condition. Under local metric subregularity of the solution mapping, we prove that the algorithm mspACM converges at a linear rate. Preliminary numerical results are reported to verify the efficiency of the algorithm mspACM.
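The central idea that proximal terms turn the minimax subproblems into cheap proximal maps can be illustrated with a related but different algorithm. The sketch below is the primal-dual hybrid gradient (Chambolle-Pock) method, not the paper's mspACM, applied to the toy saddle problem $\min_x \max_{\|y\|_\infty \le 1} \tfrac12\|x-a\|^2 + \langle Kx, y\rangle$; all data and step sizes are assumed for the example.

```python
# PDHG sketch for a nonsmooth convex-concave saddle problem (illustration only).
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 20
K = rng.standard_normal((m, n))
a = rng.standard_normal(n)

Lnorm = np.linalg.norm(K, 2)          # operator norm of K
tau = sigma = 0.9 / Lnorm             # step sizes satisfying tau*sigma*||K||^2 < 1
x = np.zeros(n); y = np.zeros(m); x_bar = x.copy()

for k in range(500):
    # dual proximal step: prox of the conjugate of ||.||_1 is projection onto the inf-ball
    y = np.clip(y + sigma * (K @ x_bar), -1.0, 1.0)
    # primal proximal step: prox of 0.5*||. - a||^2
    x_new = (x - tau * (K.T @ y) + tau * a) / (1.0 + tau)
    x_bar = 2.0 * x_new - x
    x = x_new

print("objective:", 0.5 * np.linalg.norm(x - a) ** 2 + np.linalg.norm(K @ x, 1))
```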
In this paper, an inexact proximal-point penalty method is studied for constrained optimization problems where the objective function is non-convex and the constraint functions can also be non-convex. The proposed method approximately solves a sequence of subproblems, each of which is formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved effectively to a required accuracy by an optimal gradient-based method. The computational complexity of the proposed method is analyzed separately for the cases of convex and non-convex constraints. For both cases, the complexity results are established in terms of the number of proximal gradient steps needed to find an $\varepsilon$-stationary point. When the constraint functions are convex, we show a complexity result of $\tilde{O}(\varepsilon^{-5/2})$ to produce an $\varepsilon$-stationary point under Slater's condition. When the constraint functions are non-convex, the complexity becomes $\tilde{O}(\varepsilon^{-3})$ if a non-singularity condition holds on the constraints, and otherwise $\tilde{O}(\varepsilon^{-4})$ if a feasible initial solution is available.
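The subproblem structure described above (original objective plus a proximal term plus quadratic penalties on the constraints) can be sketched on a one-dimensional toy instance. The code below is only a structural illustration: the functions `f` and `g`, the parameters `rho` and `beta`, and the plain gradient-descent inner solver are all assumptions, not the optimal gradient-based solver the paper's complexity analysis relies on.

```python
# Outer proximal-point penalty loop with inexact gradient-descent inner solves.
import numpy as np

def f(x):  return x**4 - 3.0 * x**2 + x      # non-convex toy objective
def df(x): return 4.0 * x**3 - 6.0 * x + 1.0
def g(x):  return x - 0.5                    # toy constraint g(x) <= 0
def dg(x): return 1.0

rho, beta, step = 2.0, 50.0, 1e-3            # proximal weight, penalty weight, step size
xk = 2.0                                     # initial (infeasible) point
for outer in range(20):
    x = xk
    for inner in range(500):                 # inexact solve of the strongly convexified subproblem
        # gradient of f(x) + (rho/2)*(x - xk)^2 + (beta/2)*max(0, g(x))^2
        grad = df(x) + rho * (x - xk) + beta * max(0.0, g(x)) * dg(x)
        x -= step * grad
    xk = x

print("x =", xk, " g(x) =", g(xk))
```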
Wenbo Gao, Donald Goldfarb, 2018
We expand the scope of the alternating direction method of multipliers (ADMM). Specifically, we show that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain verifiable assumptions, converges to the set of constrained stationary points if the penalty parameter in the augmented Lagrangian is sufficiently large. When the Kurdyka-Łojasiewicz (K-Ł) property holds, this is strengthened to convergence to a single constrained stationary point. Our analysis applies under assumptions that we have endeavored to make as weak as possible. It applies to problems that involve nonconvex and/or nonsmooth objective terms, in addition to the multiaffine constraints, which can involve multiple (three or more) blocks of variables. To illustrate the applicability of our results, we describe examples including nonnegative matrix factorization, sparse learning, risk parity portfolio selection, nonconvex formulations of convex problems, and neural network training. In each case, our ADMM approach encounters only subproblems that have closed-form solutions.
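A bare-bones example of ADMM on a multiaffine constraint is matrix factorization with the splitting $Z = WH$. The sketch below strips out the nonnegativity and regularization terms that the full formulation would carry, so no convergence claim is made for it; it only shows that every subproblem has a closed-form solution. All dimensions and the penalty `rho` are illustrative.

```python
# ADMM sketch on  min 0.5*||X - Z||_F^2  s.t.  Z = W H  (multiaffine constraint).
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 30, 20, 5
X = rng.random((m, r)) @ rng.random((r, n))     # low-rank data to factor
rho = 10.0
W = rng.standard_normal((m, r))
H = rng.standard_normal((r, n))
Z = X.copy()
U = np.zeros_like(X)                            # scaled multiplier for Z - W H = 0

for k in range(300):
    # each block update below is a closed-form least-squares or averaging step
    W = (Z + U) @ np.linalg.pinv(H)             # argmin_W ||Z + U - W H||_F^2
    H = np.linalg.pinv(W) @ (Z + U)             # argmin_H ||Z + U - W H||_F^2
    Z = (X + rho * (W @ H - U)) / (1.0 + rho)   # argmin_Z 0.5||X-Z||^2 + (rho/2)||Z - WH + U||^2
    U = U + Z - W @ H                           # scaled multiplier update

print("||X - W H||_F =", np.linalg.norm(X - W @ H))
```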
Decentralized optimization is a powerful paradigm that finds applications in engineering and learning design. This work studies decentralized composite optimization problems with non-smooth regularization terms. Most existing gradient-based proximal decentralized methods are known to converge to the optimal solution with sublinear rates, and it remains unclear whether this family of methods can achieve global linear convergence. To tackle this problem, this work assumes that the non-smooth regularization term is common across all networked agents, which is the case for many machine learning problems. Under this condition, we design a proximal gradient decentralized algorithm whose fixed point coincides with the desired minimizer. We then provide a concise proof that establishes its linear convergence. In the absence of the non-smooth term, our analysis technique covers the well-known EXTRA algorithm and provides useful bounds on the convergence rate and step-size.
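The key structural ingredient, a common non-smooth regularizer handled through one shared proximal map after a mixing-plus-gradient step, can be sketched as follows. This is plain proximal decentralized gradient descent, not the paper's linearly convergent algorithm and not EXTRA; the ring network, mixing matrix, local quadratics, and step sizes are all assumed for the illustration.

```python
# Proximal decentralized gradient sketch with a common l1 regularizer.
import numpy as np

rng = np.random.default_rng(4)
n_agents, d = 5, 10
A = [rng.standard_normal((20, d)) for _ in range(n_agents)]   # local data
b = [rng.standard_normal(20) for _ in range(n_agents)]
lam, alpha = 0.1, 0.01                                        # l1 weight, step size

# doubly stochastic mixing matrix for a ring network
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

X = np.zeros((n_agents, d))                                   # one row per agent
for k in range(500):
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_agents)])
    V = W @ X - alpha * grads                                 # mixing + local gradient step
    X = np.sign(V) * np.maximum(np.abs(V) - alpha * lam, 0.0) # common proximal map (soft-threshold)

print("disagreement:", np.linalg.norm(X - X.mean(axis=0)))
```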
The Alternating Direction Method of Multipliers (ADMM) has been proved to be effective for solving separable convex optimization problems subject to linear constraints. In this paper, we propose a Generalized Symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve multi-block separable convex programming problems. GS-ADMM partitions the data into two groups of variables, one consisting of $p$ block variables and the other of $q$ block variables, where $p \ge 1$ and $q \ge 1$ are two integers. The two groups are updated in a Gauss-Seidel scheme, while the variables within each group are updated in a Jacobi scheme, which makes the method very attractive for a big-data setting. By adding proper proximal terms to the subproblems, we specify the domain of the stepsizes that guarantees that GS-ADMM is globally convergent with a worst-case $O(1/t)$ ergodic convergence rate. It turns out that our convergence domain of the stepsizes is significantly larger than other convergence domains in the literature; hence, GS-ADMM is more flexible and attractive for choosing and using larger stepsizes for the dual variable. Besides, two special cases of GS-ADMM, which allow using zero penalty terms, are also discussed and analyzed. Compared with several state-of-the-art methods, preliminary numerical experiments on solving a sparse matrix minimization problem in statistical learning show that our proposed method is effective and promising.
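The update order described above (Jacobi within each group with proximal terms, Gauss-Seidel between the two groups, and two multiplier updates per iteration) can be sketched on a toy separable problem with $p = q = 2$ quadratic blocks coupled by $x_1 + x_2 + y_1 + y_2 = b$. The penalty, proximal weights, and the conservative dual stepsizes below are illustrative choices, not values taken from the paper's stepsize-domain analysis.

```python
# GS-ADMM-style iteration sketch on a toy 2+2-block separable quadratic problem.
import numpy as np

rng = np.random.default_rng(2)
d = 8
a = [rng.standard_normal(d) for _ in range(2)]   # group-1 targets (p = 2 blocks)
c = [rng.standard_normal(d) for _ in range(2)]   # group-2 targets (q = 2 blocks)
b = rng.standard_normal(d)                       # right-hand side of the coupling constraint

sigma, tau_prox = 1.0, 1.0                       # penalty and proximal weights
s1, s2 = 0.4, 0.4                                # the two dual stepsizes (conservative)

x = [np.zeros(d), np.zeros(d)]
y = [np.zeros(d), np.zeros(d)]
lam = np.zeros(d)

def residual():
    return x[0] + x[1] + y[0] + y[1] - b

for k in range(300):
    # group 1: Jacobi within the group (both blocks see the old iterates), closed-form updates
    x_old = [xi.copy() for xi in x]
    for i in range(2):
        others = x_old[1 - i] + y[0] + y[1]
        x[i] = (a[i] - lam - sigma * (others - b) + tau_prox * x_old[i]) / (1 + sigma + tau_prox)
    lam = lam + s1 * sigma * residual()          # first multiplier update
    # group 2: Gauss-Seidel w.r.t. group 1, Jacobi within the group
    y_old = [yj.copy() for yj in y]
    for j in range(2):
        others = x[0] + x[1] + y_old[1 - j]
        y[j] = (c[j] - lam - sigma * (others - b) + tau_prox * y_old[j]) / (1 + sigma + tau_prox)
    lam = lam + s2 * sigma * residual()          # second multiplier update

print("||residual|| =", np.linalg.norm(residual()))
```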