
Disciplined Multi-Convex Programming

Published by: Xinyue Shen
Publication date: 2016
Research language: English





A multi-convex optimization problem is one in which the variables can be partitioned into sets over which the problem is convex when the other variables are fixed. Multi-convex problems are generally solved approximately using variations on alternating or cyclic minimization. Multi-convex problems arise in many applications, such as nonnegative matrix factorization, generalized low rank models, and structured control synthesis, to name just a few. In most applications to date the multi-convexity is simple to verify by hand. In this paper we study the automatic detection and verification of multi-convexity using the ideas of disciplined convex programming. We describe an implementation of our proposed method that detects and verifies multi-convexity, and then invokes one of the general solution methods.
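As an illustration of the alternating minimization scheme discussed above, the sketch below factors a small nonnegative matrix by cycling between the two convex subproblems, using CVXPY for each solve. This is a minimal sketch of the general idea, not the paper's implementation; the problem sizes, initialization, and fixed iteration count are illustrative assumptions.

    import numpy as np
    import cvxpy as cp

    # Nonnegative matrix factorization A ~= W H is multi-convex: with W fixed
    # the fit is convex in H, and with H fixed it is convex in W.
    np.random.seed(0)
    m, n, k = 10, 8, 3
    A = np.abs(np.random.randn(m, n))

    W = np.abs(np.random.randn(m, k))  # initial guess for the W block
    for _ in range(20):
        # Convex subproblem in H with W held fixed.
        H_var = cp.Variable((k, n), nonneg=True)
        cp.Problem(cp.Minimize(cp.norm(A - W @ H_var, "fro"))).solve()
        H = H_var.value
        # Convex subproblem in W with H held fixed.
        W_var = cp.Variable((m, k), nonneg=True)
        cp.Problem(cp.Minimize(cp.norm(A - W_var @ H, "fro"))).solve()
        W = W_var.value

    print("fit residual:", np.linalg.norm(A - W @ H))

Each subproblem is an ordinary DCP problem, which is exactly the block-by-block structure the proposed verification method would certify automatically.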




Read also

In this paper we introduce disciplined convex-concave programming (DCCP), which combines the ideas of disciplined convex programming (DCP) with convex-concave programming (CCP). Convex-concave programming is an organized heuristic for solving nonconvex problems that involve objective and constraint functions that are a sum of a convex and a concave term. DCP is a structured way to define convex optimization problems, based on a family of basic convex and concave functions and a few rules for combining them. Problems expressed using DCP can be automatically converted to standard form and solved by a generic solver; widely used implementations include YALMIP, CVX, CVXPY, and Convex.jl. In this paper we propose a framework that combines the two ideas, and includes two improvements over previously published work on convex-concave programming, specifically the handling of domains of the functions, and the issue of nondifferentiability on the boundary of the domains. We describe a Python implementation called DCCP, which extends CVXPY, and give examples.
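As a usage sketch, a classic example for the DCCP package is maximizing the distance between two points in the unit box: the objective is convex, so maximizing it is not DCP, but it is convex-concave. The dccp import and the method="dccp" solve option below follow the package's published interface; treat the exact API as an assumption of this sketch.

    import cvxpy as cp
    import dccp  # extends CVXPY with the 'dccp' solve method

    # Maximize the (convex) distance between two points in the unit box.
    x = cp.Variable(2)
    y = cp.Variable(2)
    prob = cp.Problem(cp.Maximize(cp.norm(x - y, 2)),
                      [0 <= x, x <= 1, 0 <= y, y <= 1])
    print("is DCP:", prob.is_dcp())        # False: maximizing a convex function
    print("is DCCP:", dccp.is_dccp(prob))  # True
    prob.solve(method="dccp")              # heuristic; expects opposite corners
    print(x.value, y.value)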
We present a composition rule involving quasiconvex functions that generalizes the classical composition rule for convex functions. This rule complements well-known rules for the curvature of quasiconvex functions under increasing functions and pointwise maximums. We refer to the class of optimization problems generated by these rules, along with a base set of quasiconvex and quasiconcave functions, as disciplined quasiconvex programs. Disciplined quasiconvex programming generalizes disciplined convex programming, the class of optimization problems targeted by most modern domain-specific languages for convex optimization. We describe an implementation of disciplined quasiconvex programming that makes it possible to specify and solve quasiconvex programs in CVXPY 1.0.
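In CVXPY, such problems are flagged at solve time; a small example in the style of the CVXPY documentation is below. The specific objective is an illustrative quasiconvex ratio.

    import cvxpy as cp

    # -sqrt(x) / y is quasiconvex for y > 0, so the problem is DQCP but not DCP.
    x = cp.Variable()
    y = cp.Variable(pos=True)
    problem = cp.Problem(cp.Minimize(-cp.sqrt(x) / y), [cp.exp(x) <= y])
    assert problem.is_dqcp() and not problem.is_dcp()
    problem.solve(qcp=True)  # bisection on a family of convex feasibility problems
    print("optimal value:", problem.value)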
We introduce log-log convex programs, which are optimization problems with positive variables that become convex when the variables, objective functions, and constraint functions are replaced with their logs, which we refer to as a log-log transformation. This class of problems generalizes traditional geometric programming and generalized geometric programming, and it includes interesting problems involving nonnegative matrices. We give examples of log-log convex functions, some well-known and some less so, and we develop an analog of disciplined convex programming, which we call disciplined geometric programming. Disciplined geometric programming is a subclass of log-log convex programming generated by a composition rule and a set of functions with known curvature under the log-log transformation. Finally, we describe an implementation of disciplined geometric programming as a reduction in CVXPY 1.0.
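In CVXPY, disciplined geometric programs are written with positive variables and solved with the gp=True flag, which applies the log-log transformation before calling a convex solver. The example below, in the style of the CVXPY documentation, is a small generalized geometric program:

    import cvxpy as cp

    # Monomial objective with posynomial and monomial constraints.
    x = cp.Variable(pos=True)
    y = cp.Variable(pos=True)
    z = cp.Variable(pos=True)
    constraints = [4 * x * y * z + 2 * x * z <= 10,
                   x <= 2 * y, y <= 2 * x, z >= 1]
    problem = cp.Problem(cp.Maximize(x * y * z), constraints)
    problem.solve(gp=True)  # log-log transform, then a standard convex solve
    print("optimal value:", problem.value)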
We introduce Newton-ADMM, a method for fast conic optimization. The basic idea is to view the residuals of consecutive iterates generated by the alternating direction method of multipliers (ADMM) as a set of fixed point equations, and then use a nonsmooth Newton method to find a solution; we apply the basic idea to the Splitting Cone Solver (SCS), a state-of-the-art method for solving generic conic optimization problems. We demonstrate theoretically, by extending the theory of semismooth operators, that Newton-ADMM converges rapidly (i.e., quadratically) to a solution; empirically, Newton-ADMM is significantly faster than SCS on a number of problems. The method also has essentially no tuning parameters, generates certificates of primal or dual infeasibility, when appropriate, and can be specialized to solve specific convex problems.
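The core idea, treating the solver's iteration as a fixed-point map T and applying Newton's method to the residual F(u) = u - T(u), can be sketched generically. The toy below uses a plain forward-difference Jacobian on a gradient-step map; the paper's method instead uses semismooth Newton steps with generalized Jacobians on the ADMM map inside SCS, so everything here is an illustrative simplification.

    import numpy as np

    def newton_on_fixed_point(T, u0, tol=1e-10, max_iter=50, eps=1e-7):
        # Solve u = T(u) by Newton's method on the residual F(u) = u - T(u),
        # with the Jacobian of F approximated column by column.
        u = u0.astype(float)
        n = u.size
        for _ in range(max_iter):
            F = u - T(u)
            if np.linalg.norm(F) < tol:
                break
            J = np.empty((n, n))
            for j in range(n):
                e = np.zeros(n)
                e[j] = eps
                J[:, j] = ((u + e) - T(u + e) - F) / eps  # (F(u+e) - F(u)) / eps
            u = u - np.linalg.solve(J, F)
        return u

    # Toy map: a gradient step for min ||A u - b||^2, whose fixed point is the
    # least-squares solution; Newton recovers it in one iteration here.
    np.random.seed(0)
    A, b = np.random.randn(5, 3), np.random.randn(5)
    T = lambda u: u - 0.1 * A.T @ (A @ u - b)
    u_star = newton_on_fixed_point(T, np.zeros(3))
    print(np.allclose(A.T @ A @ u_star, A.T @ b))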
Liwei Zhang, Yule Zhang, Jia Wu (2019)
This paper considers the problem of minimizing a convex expectation function over a closed convex set, coupled with a set of inequality convex expectation constraints. We present a new stochastic approximation type algorithm, namely the stochastic approximation proximal method of multipliers (PMMSopt), to solve this convex stochastic optimization problem. We analyze regrets of a stochastic approximation proximal method of multipliers for solving convex stochastic optimization problems. Under mild conditions, we show that this algorithm exhibits ${\rm O}(T^{-1/2})$ rate of convergence, in terms of both optimality gap and constraint violation, if parameters in the algorithm are properly chosen, when the objective and constraint functions are generally convex, where $T$ denotes the number of iterations. Moreover, we show that, with at least $1-e^{-T^{1/4}}$ probability, the algorithm has no more than ${\rm O}(T^{-1/4})$ objective regret and no more than ${\rm O}(T^{-1/8})$ constraint violation regret. To the best of our knowledge, this is the first time that such a proximal method for solving expectation constrained stochastic optimization is presented in the literature.
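As a rough illustration of stochastic approximation under an expectation constraint, the sketch below runs a projected stochastic primal-dual loop with diminishing step sizes on a toy problem. It is not the PMMSopt update itself (which uses a proximal method of multipliers); the data model, step rule, and projection set are assumptions made for the illustration.

    import numpy as np

    # Toy problem: minimize E||x - xi||^2 subject to E[1 - 1'x - xi0] <= 0
    # over x in [-1, 1]^d; the solution is x_i = 1/d with multiplier 2/d.
    np.random.seed(0)
    d, T = 5, 20000
    x, lam = np.zeros(d), 0.0
    for t in range(1, T + 1):
        xi, xi0 = np.random.randn(d), np.random.randn()   # fresh samples per step
        step = 1.0 / np.sqrt(t)                           # O(t^{-1/2}) step sizes
        grad = 2 * (x - xi) - lam * np.ones(d)            # stochastic Lagrangian gradient
        x = np.clip(x - step * grad, -1.0, 1.0)           # projected primal step
        lam = max(0.0, lam + step * (1 - x.sum() - xi0))  # multiplier (dual) step
    print("x ~", np.round(x, 2), "lambda ~", round(lam, 2))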