
Interior-point methods for unconstrained geometric programming and scaling problems

 Added by Harold Nieuwboer
Publication date: 2020
Language: English





We provide a condition-based analysis of two interior-point methods for unconstrained geometric programs, a class of convex programs that arise naturally in applications including matrix scaling, matrix balancing, and entropy maximization. Our condition numbers are natural geometric quantities associated with the Newton polytope of the geometric program, and lead to diameter bounds on approximate minimizers. We also provide effective bounds on the condition numbers both in general and under combinatorial assumptions on the Newton polytope. In this way, we generalize the iteration complexity of recent interior-point methods for matrix scaling and matrix balancing. Recently, there has been much work on algorithms for certain optimization problems on Lie groups, known as capacity and scaling problems. For commutative groups, these problems reduce to unconstrained geometric programs, which serves as a particular source of motivation for our work.
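In convex form, an unconstrained geometric program minimizes $f(x) = \log \sum_j c_j \exp(a_j^{\mathsf T} x)$ with $c_j > 0$; the Newton polytope mentioned above is the convex hull of the exponent vectors $a_j$. The sketch below is a plain (undamped) Newton iteration for such a program, shown only to make the problem class concrete; it is not the interior-point method analyzed in the paper, and the instance (A, c) is hypothetical.

    import numpy as np
    from scipy.special import logsumexp

    def newton_step(x, A, c):
        # f(x) = log sum_j c_j exp(a_j^T x); the rows of A span the Newton polytope
        z = A @ x + np.log(c)
        p = np.exp(z - logsumexp(z))                 # softmax weights, sum to 1
        g = A.T @ p                                  # gradient: a convex combination of the a_j
        H = A.T @ (p[:, None] * A) - np.outer(g, g)  # Hessian of log-sum-exp
        d = np.linalg.lstsq(H, -g, rcond=None)[0]    # lstsq, since H may be singular
        return x + d, g

    # hypothetical instance: 0 lies in the interior of conv{a_j}, so a minimizer exists
    A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
    c = np.array([1.0, 2.0, 3.0])
    x = np.zeros(2)
    for _ in range(20):
        x, g = newton_step(x, A, c)
        if np.linalg.norm(g) < 1e-10:
            break
    print(x, logsumexp(A @ x + np.log(c)))           # approximate minimizer and value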



Related research

We propose a primal-dual interior-point method (IPM) with convergence to second-order stationary points (SOSPs) of nonlinear semidefinite optimization problems, abbreviated as NSDPs. As far as we know, the current algorithms for NSDPs only ensure convergence to first-order stationary points such as Karush-Kuhn-Tucker points. The proposed method generates a sequence approximating SOSPs while minimizing a primal-dual merit function for NSDPs by using scaled gradient directions and directions of negative curvature. Under some assumptions, the generated sequence accumulates at an SOSP with a worst-case iteration complexity. This result is also obtained for a primal IPM with slight modification. Finally, our numerical experiments show the benefits of using directions of negative curvature in the proposed method.
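As a toy illustration of the mechanism, not the paper's merit-function method, the sketch below switches between a regularized Newton direction and a direction of negative curvature depending on the spectrum of the Hessian; the function name and tolerance are hypothetical.

    import numpy as np

    def descent_direction(grad, hess, tol=1e-8):
        # symmetric eigendecomposition; eigenvalues come back sorted ascending
        evals, evecs = np.linalg.eigh(hess)
        if evals[0] < -tol:
            # direction of negative curvature, oriented so that grad @ d <= 0
            v = evecs[:, 0]
            return -v if grad @ v > 0 else v
        # otherwise a regularized Newton direction on the (nearly) convex part
        shift = max(0.0, tol - evals[0])
        return -np.linalg.solve(hess + shift * np.eye(len(grad)), grad)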
We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both primal and dual oracles, the proposed methods are optimal in terms of the number of communication steps. However, for all classes of objective, optimality in terms of the number of oracle calls per node holds only up to a logarithmic factor and up to the notion of smoothness. Using a mini-batching technique, we show that the proposed methods with a stochastic oracle can additionally be parallelized at each node. The considered algorithms can be applied to many data science and inverse problems.
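A minimal sketch of one such scheme under simplifying assumptions (decentralized SGD over a fixed gossip matrix with a mini-batched primal stochastic gradient oracle; the data, topology, and step size are hypothetical, and this is not the specific optimal method of the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, dim, batch = 4, 5, 8
    # each node holds a private least-squares objective with a planted solution of ones
    A = [rng.normal(size=(50, dim)) for _ in range(n_nodes)]
    b = [Ai @ np.ones(dim) + 0.1 * rng.normal(size=50) for Ai in A]

    # doubly stochastic mixing matrix for a 4-node ring
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])

    X, lr = np.zeros((n_nodes, dim)), 0.05
    for t in range(500):
        G = np.empty_like(X)
        for i in range(n_nodes):
            idx = rng.choice(50, size=batch, replace=False)    # one mini-batched oracle call
            G[i] = A[i][idx].T @ (A[i][idx] @ X[i] - b[i][idx]) / batch
        X = W @ X - lr * G    # one communication (gossip) step plus a local gradient step
    print(np.linalg.norm(X.mean(axis=0) - np.ones(dim)))       # distance to the planted solution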
113 - Yangyang Xu 2020
First-order methods (FOMs) have recently been applied and analyzed for solving problems with complicated functional constraints. Existing works show that FOMs for functionally constrained problems have lower-order convergence rates than those for unconstrained problems. In particular, an FOM for a smooth strongly-convex problem can have linear convergence, while it can only converge sublinearly for a constrained problem if the projection onto the constraint set is prohibited. In this paper, we point out that the slower convergence is caused by the large number of functional constraints, not by the constraints themselves. When there are only $m=O(1)$ functional constraints, we show that an FOM can have almost the same convergence rate as for solving an unconstrained problem, even without the projection onto the feasible set. In addition, given an $\varepsilon>0$, we show that a complexity result better than a lower bound can be obtained if there are only $m=o(\varepsilon^{-\frac{1}{2}})$ functional constraints. Our result is surprising but does not contradict the existing lower complexity bound, because we focus on a specific subclass of problems. Experimental results on quadratically constrained quadratic programs demonstrate our theory.
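To make the setting concrete, here is a generic first-order augmented Lagrangian sketch for a QCQP with a single functional constraint ($m=1$); it is offered only as an illustration of projection-free handling of one constraint, not as the method proposed in the paper, and the instance is hypothetical.

    import numpy as np

    # hypothetical instance: min 0.5||x - b||^2  s.t.  ||x||^2 <= 1   (a QCQP with m = 1)
    b = np.array([2.0, 1.0])
    g = lambda x: x @ x - 1.0            # single functional constraint g(x) <= 0

    beta, lam = 2.0, 0.0                 # penalty parameter and Lagrange multiplier
    x = np.zeros(2)
    for outer in range(20):
        for inner in range(300):         # first-order steps on the augmented Lagrangian
            mult = max(0.0, lam + beta * g(x))
            x -= 0.02 * ((x - b) + mult * 2.0 * x)
        lam = max(0.0, lam + beta * g(x))       # multiplier update
    print(x, np.linalg.norm(x))          # expect x ~ b/||b||, on the constraint boundary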
We introduce and study conic geometric programs (CGPs), which are convex optimization problems that unify geometric programs (GPs) and conic optimization problems such as semidefinite programs (SDPs). A CGP consists of a linear objective function that is to be minimized subject to affine constraints, convex conic constraints, and upper bound constraints on sums of exponential and affine functions. The conic constraints are the central feature of conic programs such as SDPs, while upper bounds on combined exponential/affine functions are generalizations of the types of constraints found in GPs. The dual of a CGP involves the maximization of the negative relative entropy between two nonnegative vectors jointly, subject to affine and conic constraints on the two vectors. Although CGPs contain GPs and SDPs as special instances, computing global optima of CGPs is not much harder than solving GPs and SDPs. More broadly, the CGP framework facilitates a range of new applications that fall outside the scope of SDPs and GPs. Specifically, we demonstrate the utility of CGPs in providing solutions to problems such as permanent maximization, hitting-time estimation in dynamical systems, the computation of the capacity of channels transmitting quantum information, and robust optimization formulations of GPs.
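Schematically, and in notation introduced here only for illustration (the symbols $F$, $g$, $q_{ij}$, $r_i$, $s_i$ are not from the abstract), a CGP as described above can be written as

    \begin{array}{ll}
    \text{minimize}   & c^{\mathsf{T}} x \\
    \text{subject to} & Fx = g, \quad x \in \mathcal{K}, \\
                      & \sum_{j=1}^{n_i} \exp\!\left(q_{ij}^{\mathsf{T}} x\right) + r_i^{\mathsf{T}} x \le s_i, \quad i = 1, \dots, k,
    \end{array}

where $\mathcal{K}$ is a convex cone (for instance, the positive semidefinite cone, which recovers SDP-type constraints).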
We introduce log-log convex programs, which are optimization problems with positive variables that become convex when the variables, objective functions, and constraint functions are replaced with their logs, which we refer to as a log-log transformation. This class of problems generalizes traditional geometric programming and generalized geometric programming, and it includes interesting problems involving nonnegative matrices. We give examples of log-log convex functions, some well-known and some less so, and we develop an analog of disciplined convex programming, which we call disciplined geometric programming. Disciplined geometric programming is a subclass of log-log convex programming generated by a composition rule and a set of functions with known curvature under the log-log transformation. Finally, we describe an implementation of disciplined geometric programming as a reduction in CVXPY 1.0.
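A minimal usage sketch of disciplined geometric programming in CVXPY, where the gp=True flag invokes the log-log transformation; the toy problem itself is hypothetical.

    import cvxpy as cp

    x = cp.Variable(pos=True)   # DGP requires positive variables
    y = cp.Variable(pos=True)

    # monomial objective with posynomial and monomial constraints
    prob = cp.Problem(cp.Minimize(x * y),
                      [x + y <= 4, x * y >= 2, x >= 1])
    prob.solve(gp=True)         # solve under the log-log (DGP) ruleset
    print(prob.value, x.value, y.value)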