
A Survey of Algorithms for Separable Convex Optimization with Linear Ascending Constraints

Published by: Akhil P T
Publication date: 2016
Language: English





The paper considers the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation scenarios, and is a special case of an optimization of a separable convex function over the bases of a polymatroid with a certain structure. The paper presents a survey of state-of-the-art algorithms that solve this optimization problem. The algorithms are applicable to the class of separable convex objective functions that need not be smooth or strictly convex. When the objective function is a so-called $d$-separable function, a simpler linear time algorithm solves the problem.
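As a concrete illustration of the problem class surveyed above, the sketch below states a small instance with CVXPY. The quadratic objective, the weights, and the right-hand sides alpha are illustrative assumptions, not taken from the paper; the constraint pattern (nondecreasing partial-sum lower bounds with the total allocation fixed) follows the standard linear ascending structure studied in Padakandla and Sundaresan.

import numpy as np
import cvxpy as cp

# Illustrative data (assumed): weights, targets, and nondecreasing right-hand sides.
n = 5
w = np.array([1.0, 2.0, 0.5, 1.5, 1.0])
t = np.array([3.0, 1.0, 2.0, 4.0, 2.0])
alpha = np.array([1.0, 2.5, 4.0, 6.0, 10.0])

x = cp.Variable(n, nonneg=True)

# Separable convex objective: sum_i w_i * (x_i - t_i)^2.
objective = cp.Minimize(cp.sum(cp.multiply(w, cp.square(x - t))))

# Linear ascending constraints on the partial sums x_1 + ... + x_l.
partial_sums = cp.cumsum(x)
constraints = [partial_sums[:-1] >= alpha[:-1],  # ascending lower bounds
               partial_sums[-1] == alpha[-1]]    # total allocation fixed

cp.Problem(objective, constraints).solve()
print(x.value)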


Read also

Zizhuo Wang, 2012
In this paper, we propose two algorithms for solving convex optimization problems with linear ascending constraints. When the objective function is separable, we propose a dual method which terminates in a finite number of iterations. In particular, the worst case complexity of our dual method improves over the best-known result for this problem in Padakandla and Sundaresan [SIAM J. Optimization, 20 (2009), pp. 1185-1204]. We then propose a gradient projection method to solve a more general class of problems in which the objective function is not necessarily separable. Numerical experiments show that both our algorithms work well in test problems.
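For reference, the gradient projection method mentioned in this abstract follows the standard scheme below (the stepsize rule and the projection computation are as chosen in that paper and not reproduced here):

$$x^{k+1} = \Pi_{C}\big(x^{k} - \gamma_k \nabla f(x^{k})\big),$$

where $C$ denotes the feasible set defined by the linear ascending constraints and $\Pi_{C}$ is the Euclidean projection onto it.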
Run Chen and Andrew L. Liu, 2021
This paper first proposes an N-block PCPM algorithm to solve N-block convex optimization problems with both linear and nonlinear constraints, and establishes its global convergence. A linear convergence rate under strong second-order optimality conditions is observed in the numerical experiments. Next, as a starting point, an asynchronous N-block PCPM algorithm is proposed to solve linearly constrained N-block convex optimization problems. The numerical results demonstrate a sub-linear convergence rate under the bounded delay assumption, as well as faster convergence than a synchronous iterative scheme, obtained by completing more iterations in the same amount of time.
The Alternating Direction Method of Multipliers (ADMM) has been proved to be effective for solving separable convex optimization subject to linear constraints. In this paper, we propose a Generalized Symmetric ADMM (GS-ADMM), which updates the Lagrange multiplier twice with suitable stepsizes, to solve multi-block separable convex programming. GS-ADMM partitions the data into two groups of variables, one consisting of $p$ block variables and the other of $q$ block variables, where $p \ge 1$ and $q \ge 1$ are integers. The two groups are updated in a {\it Gauss-Seidel} scheme, while the variables within each group are updated in a {\it Jacobi} scheme, which makes the method attractive for big data settings. By adding proper proximal terms to the subproblems, we specify the domain of the stepsizes that guarantees GS-ADMM is globally convergent with a worst-case $O(1/t)$ ergodic convergence rate. It turns out that our convergence domain of the stepsizes is significantly larger than other convergence domains in the literature; hence GS-ADMM is more flexible, allowing larger stepsizes for the dual variable. Besides, two special cases of GS-ADMM, which allow using zero penalty terms, are also discussed and analyzed. Compared with several state-of-the-art methods, preliminary numerical experiments on solving a sparse matrix minimization problem in statistical learning show that our proposed method is effective and promising.
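For context, the multi-block separable programming problem that GS-ADMM targets can be written in the standard form (notation assumed here for illustration):

$$\min_{x_1,\dots,x_{p+q}} \ \sum_{i=1}^{p+q} \theta_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^{p+q} A_i x_i = b,$$

with the first $p$ blocks forming one group and the remaining $q$ blocks the other; the Gauss-Seidel/Jacobi scheme described in the abstract updates the two groups sequentially while updating the blocks inside each group in parallel.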
In this paper, we develop a parameterized proximal point algorithm (P-PPA) for solving a class of separable convex programming problems subject to linear and convex constraints. The proposed algorithm is provably globally convergent with a worst-case $O(1/t)$ convergence rate, where $t$ denotes the iteration number. By properly choosing the algorithm parameters, numerical experiments on solving a sparse optimization problem arising from statistical learning show that our P-PPA can perform significantly better than other state-of-the-art methods, such as the alternating direction method of multipliers and the relaxed proximal point algorithm.
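P-PPA builds on the classical proximal point iteration, which for a convex objective $f$ reads

$$x^{k+1} = \arg\min_{x} \Big\{ f(x) + \tfrac{1}{2\lambda}\|x - x^{k}\|^{2} \Big\}, \qquad \lambda > 0;$$

the parameterization referred to in the abstract modifies this template (the details are in the paper and not assumed here).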
An inexact accelerated stochastic Alternating Direction Method of Multipliers (AS-ADMM) scheme is developed for solving structured separable convex optimization problems with linear constraints. The objective function is the sum of a possibly nonsmooth convex function and a smooth function which is an average of many component convex functions. Problems having this structure often arise in machine learning and data mining applications. AS-ADMM combines the ideas of both ADMM and the stochastic gradient methods using variance reduction techniques. One of the ADMM subproblems employs a linearization technique while a similar linearization could be introduced for the other subproblem. For a specified choice of the algorithm parameters, it is shown that the objective error and the constraint violation are $\mathcal{O}(1/k)$ relative to the number of outer iterations $k$. Under a strong convexity assumption, the expected iterate error converges to zero linearly. A linearized variant of AS-ADMM and incremental sampling strategies are also discussed. Numerical experiments with both stochastic and deterministic ADMM algorithms show that AS-ADMM can be particularly effective for structured optimization arising in big data applications.
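One common way to write the structured problem this abstract describes, with symbols assumed here for illustration, is

$$\min_{x,\,y} \ \ \frac{1}{N}\sum_{j=1}^{N} h_j(x) \;+\; g(y) \quad \text{subject to} \quad Ax + By = b,$$

where each $h_j$ is a smooth convex component function and $g$ is convex but possibly nonsmooth.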