
On Solving Convex Optimization Problems with Linear Ascending Constraints

Posted by: Zizhuo Wang
Publication date: 2012
Language: English
Author: Zizhuo Wang





In this paper, we propose two algorithms for solving convex optimization problems with linear ascending constraints. When the objective function is separable, we propose a dual method that terminates in a finite number of iterations. In particular, the worst-case complexity of our dual method improves on the best-known result for this problem in Padakandla and Sundaresan [SIAM J. Optim., 20 (2009), pp. 1185-1204]. We then propose a gradient projection method for a more general class of problems in which the objective function is not necessarily separable. Numerical experiments show that both algorithms perform well on test problems.
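For readers unfamiliar with the constraint structure, a representative formulation of the separable case is the following (the symbols $f_i$, $a_i$, and $b_i$ are ours, and the paper's exact formulation may use different notation or additional bounds):

$$\begin{aligned} \min_{x}\;\; & \sum_{i=1}^{n} f_i(x_i) \\ \text{s.t.}\;\; & \sum_{i=1}^{l} x_i \;\ge\; \sum_{i=1}^{l} a_i, \quad l = 1,\dots,n-1, \\ & \sum_{i=1}^{n} x_i \;=\; \sum_{i=1}^{n} a_i, \qquad 0 \le x_i \le b_i \;\; (i = 1,\dots,n), \end{aligned}$$

where each $f_i$ is convex. The nested partial-sum ("ascending") constraints are what distinguish this problem from a resource allocation problem with a single budget constraint.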




Read also

The paper considers the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation scenarios, and is a special case of minimizing a separable convex function over the bases of a polymatroid with a certain structure. The paper presents a survey of state-of-the-art algorithms that solve this optimization problem. The algorithms are applicable to the class of separable convex objective functions that need not be smooth or strictly convex. When the objective function is a so-called $d$-separable function, a simpler linear-time algorithm solves the problem.
In this paper we present a new algorithmic realization of a projection-based scheme for general convex constrained optimization problems. The general idea is to transform the original optimization problem into a sequence of feasibility problems by iteratively constraining the objective function from above until the feasibility problem is inconsistent. For each of the feasibility problems one may apply any of the existing projection methods for solving it. In particular, the scheme allows the use of subgradient projections and does not require exact projections onto the constraint sets, as existing similar methods do. We also apply the newly introduced concept of superiorization to the optimization formulation and compare its performance to our scheme. We provide some numerical results for convex quadratic test problems as well as for real-life optimization problems arising in medical treatment planning.
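As a concrete illustration of the level-set idea described above, here is a minimal Python/NumPy sketch on a toy quadratic problem of our own construction; it is not the paper's algorithm, and the stopping rule, tightening factor, and iteration counts are placeholders.

import numpy as np

# Toy instance (our own): minimize f(x) = ||x - c||^2 over the half-space
# {x : a^T x <= b}. The scheme bounds f from above by a level tau, seeks a
# point in the intersection of the level set {f <= tau} and the constraint
# set by alternating (sub)gradient projections, and keeps tightening tau
# until that feasibility problem appears inconsistent.

c = np.array([2.0, 1.0])             # unconstrained minimizer of f
a, b = np.array([1.0, 1.0]), 1.0     # constraint set: x1 + x2 <= 1

def proj_halfspace(x):
    """Exact Euclidean projection onto {x : a^T x <= b}."""
    viol = a @ x - b
    return x - (viol / (a @ a)) * a if viol > 0 else x

def subgrad_proj_level(x, tau):
    """One subgradient projection step toward {x : f(x) <= tau}."""
    fx = np.sum((x - c) ** 2)
    if fx <= tau:
        return x
    g = 2.0 * (x - c)                        # gradient of f at x
    return x - ((fx - tau) / (g @ g)) * g

x = proj_halfspace(np.zeros(2))              # feasible starting point
tau = np.sum((x - c) ** 2)                   # an attainable initial level
while True:
    tau *= 0.95                              # tighten the level: f(x) <= tau
    y = x.copy()
    for _ in range(200):                     # feasibility-seeking inner loop
        y = proj_halfspace(subgrad_proj_level(y, tau))
    if np.sum((y - c) ** 2) > tau + 1e-6:    # level set no longer reachable
        break                                # within the constraints: stop
    x = y

print("approximate minimizer:", x)           # close to [1.0, 0.0]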
A computationally efficient method to solve non-convex programming problems with linear equality constraints is presented. The proposed method is based on a recursively feasible and descending sequential convex programming procedure proven to converge to a locally optimal solution. Assuming that the first convex problem in the sequence is feasible, these properties are obtained by convexifying the non-convex cost and inequality constraints with inner-convex approximations. Additionally, a computationally efficient method is introduced to obtain inner-convex approximations based on Taylor series expansions. These Taylor-based inner-convex approximations give the overall algorithm a quadratic rate of convergence. The proposed method is capable of solving problems of practical interest in real time. This is illustrated with a numerical simulation of an aerial vehicle trajectory optimization problem on commercial off-the-shelf embedded computers.
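The following is a minimal sketch of one sequential-convex-programming iteration of the majorize-minimize flavor on a toy problem of our own; the paper's inner-convex approximations and convergence machinery are more general than this.

import numpy as np

# Minimize a non-convex cost f subject to a linear equality constraint
# A x = b. At each iterate the cost is replaced by a convex quadratic
# upper bound (Taylor model plus curvature term); the resulting
# equality-constrained QP is solved exactly via its KKT system, so every
# iterate stays feasible and the cost descends.

def f(x):                          # toy non-convex cost (our assumption)
    return np.sum(np.cos(x)) + 0.1 * np.sum(x ** 2)

def grad_f(x):
    return -np.sin(x) + 0.2 * x

A = np.array([[1.0, 2.0, -1.0]])   # single linear equality constraint
b = np.array([1.0])
L = 1.5                            # curvature bound for the majorizer

x = np.linalg.lstsq(A, b, rcond=None)[0]   # a feasible starting point
n, m = x.size, A.shape[0]
for _ in range(100):
    # Convex subproblem: min grad_f(x)^T d + (L/2)||d||^2  s.t.  A d = 0.
    K = np.block([[L * np.eye(n), A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_f(x), np.zeros(m)])
    d = np.linalg.solve(K, rhs)[:n]
    x = x + d
    if np.linalg.norm(d) < 1e-9:   # no further progress: stationary point
        break

print("stationary point:", x, "cost:", f(x))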
Optimization models with non-convex constraints arise in many tasks in machine learning, e.g., learning with fairness constraints or Neyman-Pearson classification with non-convex loss. Although many efficient methods have been developed with theoretical convergence guarantees for non-convex unconstrained problems, it remains a challenge to design provably efficient algorithms for problems with non-convex functional constraints. This paper proposes a class of subgradient methods for constrained optimization where the objective function and the constraint functions are weakly convex. Our methods solve a sequence of strongly convex subproblems, in which a proximal term is added to both the objective function and each constraint function. Each subproblem can be solved by various algorithms for strongly convex optimization. Under a uniform Slater's condition, we establish the computational complexity of our methods for finding a nearly stationary point.
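One natural way to form the strongly convex subproblems mentioned above is the following (this is our reading of the construction; the paper's exact regularization and inexactness conditions may differ):

$$x_{k+1} \;\approx\; \arg\min_{x}\; \Big\{\, f(x) + \tfrac{\rho}{2}\lVert x - x_k\rVert^2 \;:\; g_i(x) + \tfrac{\rho}{2}\lVert x - x_k\rVert^2 \le 0,\;\; i = 1,\dots,m \,\Big\},$$

where $\rho$ exceeds the weak-convexity moduli of $f$ and of each constraint $g_i$, so that every function appearing in the subproblem is strongly convex.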
Motivated by recent work of Renegar, we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. Our problem of interest is the general convex optimization problem $f^* = \min_{x \in Q} f(x)$, where we presume knowledge of a strict lower bound $f_{\mathrm{slb}} < f^*$. [Indeed, $f_{\mathrm{slb}}$ is naturally known when optimizing many loss functions in statistics and machine learning (least-squares, logistic loss, exponential loss, total variation loss, etc.) as well as in Renegar's transformed version of the standard conic optimization problem; in all these cases one has $f_{\mathrm{slb}} = 0 < f^*$.] We introduce a new functional measure called the growth constant $G$ for $f(\cdot)$, which measures how quickly the level sets of $f(\cdot)$ grow relative to the function value, and which plays a fundamental role in the complexity analysis. When $f(\cdot)$ is non-smooth, we present new computational guarantees for the Subgradient Descent Method and for smoothing methods that can improve existing computational guarantees in several ways, most notably when the initial iterate $x^0$ is far from the optimal solution set. When $f(\cdot)$ is smooth, we present a scheme for periodically restarting the Accelerated Gradient Method that can also improve existing computational guarantees when $x^0$ is far from the optimal solution set, and in the presence of added structure we present a scheme using parametrically increased smoothing that further improves the associated computational guarantees.
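To make the role of the lower bound concrete, the classical Polyak-type subgradient step replaces the (typically unknown) optimal value by the known bound $f_{\mathrm{slb}}$; the step below is this standard construction, not the paper's exact method, whose step sizes and restart rules refine the basic idea:

$$x^{k+1} \;=\; P_Q\!\left( x^{k} \;-\; \frac{f(x^{k}) - f_{\mathrm{slb}}}{\lVert g^{k}\rVert^{2}}\, g^{k} \right), \qquad g^{k} \in \partial f(x^{k}),$$

where $P_Q$ denotes the Euclidean projection onto $Q$.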