
Low-rank optimization for semidefinite convex problems

Added by Rodolphe Sepulchre
Publication date: 2008
Language: English





We propose an algorithm for solving nonlinear convex programs defined in terms of a symmetric positive semidefinite matrix variable $X$. The algorithm rests on the factorization $X = Y Y^T$, where the number of columns of $Y$ fixes the rank of $X$. It is thus very effective for solving programs that have a low-rank solution. The factorization $X = Y Y^T$ leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The present paper discusses the geometry of that manifold and derives a second-order optimization method. It furthermore provides conditions on the rank of the factorization that ensure equivalence with the original problem. The efficiency of the proposed algorithm is illustrated on two applications: the maximal cut of a graph and the sparse principal component analysis problem.
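
As a concrete illustration of the factorization idea, here is a minimal first-order sketch in Python (not the authors' second-order quotient-manifold method) for the max-cut relaxation: it maximizes $\mathrm{tr}(L Y Y^T)$ over matrices $Y$ with unit-norm rows, so that $X = Y Y^T$ is feasible with $\mathrm{diag}(X) = 1$ and rank at most $p$. The function name and the step-size and iteration parameters are illustrative assumptions; $L$ denotes the graph Laplacian.

import numpy as np

def maxcut_low_rank(L, p, steps=1000, lr=1e-3, seed=0):
    # Gradient ascent on the factor Y with unit-norm rows, so that
    # X = Y @ Y.T stays PSD with diag(X) = 1 and rank(X) <= p.
    rng = np.random.default_rng(seed)
    n = L.shape[0]
    Y = rng.standard_normal((n, p))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(steps):
        G = 2.0 * L @ Y                                # Euclidean gradient of tr(L Y Y^T)
        G -= np.sum(G * Y, axis=1, keepdims=True) * Y  # project onto the row-sphere tangent spaces
        Y += lr * G
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # retract back to unit-norm rows
    return Y  # low-rank factor of the feasible iterate X = Y @ Y.T

A cut can then be extracted by random-hyperplane rounding, e.g. taking the signs of Y @ rng.standard_normal(p).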



Related research

In this paper, we show that the bundle method can be applied to solve semidefinite programming problems with a low-rank solution without ever constructing a full matrix. To accomplish this, we use recent results from randomly sketching matrix optimization problems and from the analysis of bundle methods. Under strong duality and strict complementarity of the SDP, our algorithm produces primal and dual sequences converging in feasibility at a rate of $\tilde{O}(1/\epsilon)$ and in optimality at a rate of $\tilde{O}(1/\epsilon^2)$. Moreover, our algorithm outputs a low-rank representation of its approximate solution with distance to the optimal solution at most $O(\sqrt{\epsilon})$ within $\tilde{O}(1/\epsilon^2)$ iterations.
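
The "without ever constructing a full matrix" claim rests on matrix sketching. The toy below (not the paper's algorithm, and with illustrative names and sizes) shows the Nystrom-type idea: a PSD iterate is accessed only through products with a fixed random test matrix, and a low-rank approximation is recovered from that sketch; the full matrix is formed here only to verify the reconstruction.

import numpy as np

def nystrom_sketch_demo(n=200, r=5, k=10, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, r))
    X = A @ A.T                          # a rank-r PSD iterate (formed here only to verify)
    Omega = rng.standard_normal((n, k))  # fixed random test matrix, k >= r
    S = X @ Omega                        # the n x k sketch is all an algorithm would store
    X_hat = S @ np.linalg.pinv(Omega.T @ S) @ S.T  # Nystrom reconstruction
    return np.linalg.norm(X - X_hat) / np.linalg.norm(X)  # ~0 for an exactly rank-r X

print(nystrom_sketch_demo())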
We describe a modular rewriting system for translating optimization problems written in a domain-specific language to forms compatible with low-level solver interfaces. Translation is facilitated by reductions, which accept a category of problems and transform instances of that category to equivalent instances of another category. Our system proceeds in two key phases: analysis, in which we attempt to find a suitable solver for a supplied problem, and canonicalization, in which we rewrite the problem in the selected solver's standard form. We implement the described system in version 1.0 of CVXPY, a domain-specific language for mathematical optimization, especially convex optimization. By treating reductions as first-class objects, our method makes it easy to match problems to solvers well-suited for them and to support solvers with a wide variety of standard forms.
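
As a usage sketch, assuming CVXPY 1.x is installed, the snippet below builds a small constrained least-squares problem; the single call to solve() triggers the two phases described above, analysis (solver selection) followed by canonicalization (rewriting into that solver's standard form). The data and sizes are arbitrary.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

x = cp.Variable(5)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
prob = cp.Problem(objective, [x >= 0])   # nonnegative least squares
prob.solve()                             # analysis + canonicalization + solver call
print(prob.status, prob.value)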
We present new results on optimization problems where the involved functions are evenly convex. By means of a generalized conjugation scheme and the perturbation theory introduced by Rockafellar, we propose an alternative dual problem for a general optimization problem defined on a separated locally convex topological space. Sufficient conditions for converse and total duality involving the even convexity of the perturbation function and $c$-subdifferentials are given. Formulae for the $c$-subdifferential and the biconjugate of the objective function of a general optimization problem are provided as well. We also characterize total duality by means of the saddle-point theory for a notion of Lagrangian adapted to the considered framework.
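
For orientation, Rockafellar's perturbation scheme in its classical (Fenchel-conjugate) form is sketched below; the paper's contribution replaces this conjugate by a generalized $c$-conjugate adapted to evenly convex functions. One chooses a perturbation function $\Phi: X \times Y \to \overline{\mathbb{R}}$ with $\Phi(x, 0) = f(x)$, and pairs the primal with a dual built from the conjugate of $\Phi$:

$$ (P)\quad \inf_{x \in X} \Phi(x, 0), \qquad (D)\quad \sup_{y^* \in Y^*} \; -\Phi^*(0, y^*), $$

where $\Phi^*(x^*, y^*) = \sup_{x, y} \{ \langle x^*, x \rangle + \langle y^*, y \rangle - \Phi(x, y) \}$. Weak duality $\sup(D) \le \inf(P)$ always holds; converse and total duality concern when this gap closes and is attained.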
In this study, we present a general framework of outer approximation algorithms for solving convex vector optimization problems, in which the Pascoletti-Serafini (PS) scalarization is solved iteratively. This scalarization finds the minimum distance from a reference point, usually taken as a vertex of the current outer approximation, to the upper image along a given direction. We propose efficient methods for selecting the parameters (the reference point and the direction vector) of the PS scalarization and analyze their effects on the overall performance of the algorithm. Unlike the existing vertex selection rules from the literature, the proposed methods do not require solving additional single-objective optimization problems. Using some test problems, we conduct an extensive computational study in which three different measures are set as the stopping criteria: the approximation error, the runtime, and the cardinality of the solution set. We observe that the proposed variants give satisfactory results, especially in terms of runtime, compared to the existing variants from the literature.
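
In one common formulation (the notation $v$, $d$, $C$ here is ours, not necessarily the paper's), the PS scalarization with reference point $v$ and direction $d$ for an objective $f$ ordered by a cone $C$ reads

$$ \mathrm{PS}(v, d):\quad \min_{x \in \mathcal{X},\; t \in \mathbb{R}} \; t \quad \text{s.t.} \quad v + t\, d - f(x) \in C, $$

so the optimal $t$ measures how far $v$ must be pushed along $d$ before it lands in the upper image $f(\mathcal{X}) + C$.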
This paper considers a general convex constrained problem setting in which the functions are not assumed to be differentiable or Lipschitz continuous. Our motivation is to find a simple first-order method for solving a wide range of convex optimization problems with minimal requirements. We study the method of weighted dual averages (Nesterov, 2009) in this setting and prove that it is an optimal method.
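
For reference, the weighted dual averages method keeps a weighted running sum of subgradients and minimizes it against a scaled prox term. A standard statement (under the usual assumptions of a closed convex feasible set $Q$ and a strongly convex prox-function $d$; the exact weighting conventions vary across presentations) is

$$ x_{k+1} = \operatorname*{arg\,min}_{x \in Q} \left\{ \sum_{i=0}^{k} \lambda_i \langle g_i, x \rangle + \beta_{k+1}\, d(x) \right\}, \qquad g_i \in \partial f(x_i), $$

with weights $\lambda_i > 0$ and a nondecreasing sequence $\beta_k$. Notably, no Lipschitz constant of $f$ enters the update, which is what makes the method suitable for the minimal-requirement setting above.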