
Gradient Projection for Solving Quadratic Programs with Standard Simplex Constraints

Posted by: Youwei Liang
Publication date: 2020
Language: English
Author: Youwei Liang





An important method for optimizing a function on the standard simplex is the active set algorithm, which requires projecting the gradient of the function onto a hyperplane, with sign constraints on the variables that lie on the boundary of the simplex. We propose a new algorithm to efficiently project the gradient for this purpose. Furthermore, we apply the proposed gradient projection method to quadratic programs (QP) with standard simplex constraints, where gradient projection is used to explore the feasible region and, once we believe the optimal active set has been identified, we switch to the constrained conjugate gradient method to accelerate convergence. Specifically, two different directions of gradient projection are used to explore the simplex, namely the projected gradient and the reduced gradient, and we choose between them according to the angle between the two directions. Moreover, we propose two heuristic conditions for guessing the optimal active set: the first is that the working set remains unchanged for many iterations, and the second is that the angle between the projected gradient and the reduced gradient is small enough. Based on these strategies, a new active set algorithm for solving quadratic programs on the standard simplex is proposed.
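To make the projection step concrete, the sketch below computes the Euclidean projection of the negative gradient onto the set of directions that sum to zero and are nonnegative on the active (boundary) variables, i.e., the hyperplane with sign constraints described above. It reduces the problem to a one-dimensional search for the multiplier of the equality constraint; this is one standard way to compute such a projection and is not necessarily the new algorithm proposed in the paper (the function name `project_direction` and the use of bisection are illustrative choices).

```python
import numpy as np

def project_direction(g, active, n_bisect=100):
    """Project -g onto {d : sum(d) = 0, d[i] >= 0 for i in the active set}.

    Illustrative sketch: the KKT conditions reduce the projection to finding
    a scalar multiplier lam with d_i = v_i - lam on free indices and
    d_i = max(v_i - lam, 0) on active indices, where v = -g.
    """
    v = -np.asarray(g, dtype=float)
    active = np.asarray(active, dtype=bool)
    free = ~active

    def total(lam):
        # Sum of the candidate direction; the correct lam makes this zero.
        return (v[free] - lam).sum() + np.maximum(v[active] - lam, 0.0).sum()

    lo, hi = v.min() - 1.0, v.max() + 1.0   # total(lo) >= 0 >= total(hi)
    for _ in range(n_bisect):               # bisection on the multiplier
        mid = 0.5 * (lo + hi)
        if total(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return np.where(free, v - lam, np.maximum(v - lam, 0.0))
```

The same one-dimensional structure also admits an exact sort-based solution; bisection is used here only to keep the sketch short.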


Read also

Yunmei Chen, Xiaojing Ye (2011)
This mini-paper presents a fast and simple algorithm to compute the projection onto the canonical simplex $\triangle^n$. Utilizing Moreau's identity, we show that the problem is essentially a univariate minimization and the objective function is strictly convex and continuously differentiable. Moreover, it is shown that there are at most $n$ candidates which can be computed explicitly, and the minimizer is the only one that falls into the correct interval.
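As an illustration of the "at most $n$ explicit candidates" observation, the following is a common sort-based implementation of the projection onto the probability simplex: it enumerates the candidate thresholds and keeps the one that falls into the correct interval. It follows the standard construction rather than the exact presentation of the mini-paper; the function name `project_simplex` is illustrative.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto {x : x >= 0, sum(x) = 1}."""
    y = np.asarray(y, dtype=float)
    u = np.sort(y)[::-1]                                   # sort in decreasing order
    t = (np.cumsum(u) - 1.0) / np.arange(1, y.size + 1)    # candidate thresholds
    rho = np.nonzero(u > t)[0][-1]                         # last candidate in the correct interval
    return np.maximum(y - t[rho], 0.0)
```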
Quadratic optimization problems (QPs) are ubiquitous, and solution algorithms have matured to a reliable technology. However, the precision of solutions is usually limited by the underlying floating-point operations, which may cause inconveniences when solutions are used for rigorous reasoning. We contribute on three levels to overcome this issue. First, we present a novel refinement algorithm to solve QPs to arbitrary precision. It iteratively solves refined QPs, assuming a floating-point QP solver oracle, and we prove linear convergence of residuals and primal errors. Second, we provide an efficient implementation, based on SoPlex and qpOASES, that is publicly available in source code. Third, we give precise reference solutions for the Maros and Meszaros benchmark library.
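The refinement idea can be illustrated on a toy equality-constrained QP, where a float32 KKT solve stands in for the limited-precision QP oracle and residuals are accumulated in float64. This is only a conceptual sketch under simplifying assumptions (no inequality constraints, a hypothetical helper `refine_eq_qp`); the cited algorithm is considerably more general and comes with a convergence proof.

```python
import numpy as np

def refine_eq_qp(Q, c, A, b, iters=10):
    """Toy iterative refinement for min 0.5*x'Qx + c'x  s.t.  Ax = b.

    The KKT system is solved in float32 (a stand-in for the limited-precision
    QP oracle); the residual of the candidate solution is computed in float64
    and a correction is obtained by re-solving against that residual.
    """
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])      # KKT matrix
    rhs = np.concatenate([-c, b]).astype(np.float64)
    K32 = K.astype(np.float32)
    z = np.zeros(n + m)                                  # [x; multipliers]
    for _ in range(iters):
        r = rhs - K @ z                                  # high-precision residual
        dz = np.linalg.solve(K32, r.astype(np.float32))  # low-precision correction
        z = z + dz.astype(np.float64)
    return z[:n], z[n:]
```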
Small-scale Mixed-Integer Quadratic Programming (MIQP) problems often arise in embedded control and estimation applications. Driven by the need for algorithmic simplicity to target computing platforms with limited memory and computing resources, this paper proposes a few approaches to solving MIQPs, either to optimality or suboptimally. We specialize an existing Accelerated Dual Gradient Projection (GPAD) algorithm to effectively solve the Quadratic Programming (QP) relaxations that arise during Branch and Bound (B&B), and propose a generic framework to warm-start the binary variables, which reduces the number of QP relaxations. Moreover, in order to find an integer feasible combination of the binary variables upfront, two heuristic approaches are presented: ($i$) without using B&B, and ($ii$) using B&B with a significantly reduced number of QP relaxations. Both heuristics return an integer feasible solution that may be suboptimal but requires much less computational effort. Such a feasible solution can either be implemented directly or be used to set an initial upper bound on the optimal cost in B&B. Through different hybrid control and estimation examples involving binary decision variables, we show that the performance of the proposed methods, although they are very simple to code, is comparable to that of state-of-the-art MIQP solvers.
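For reference, a bare-bones accelerated dual gradient projection iteration for a strictly convex QP with inequality constraints could look as follows. This is a generic sketch of the GPAD idea for a small dense problem, not the specialized, embedded-friendly solver described in the paper; the function name `gpad_qp` is illustrative.

```python
import numpy as np

def gpad_qp(Q, c, G, g, iters=500):
    """Accelerated dual gradient projection for
    min 0.5*x'Qx + c'x  s.t.  Gx <= g, with Q positive definite (sketch)."""
    Qinv = np.linalg.inv(Q)
    L = np.linalg.norm(G @ Qinv @ G.T, 2)            # Lipschitz constant of the dual gradient
    lam = np.zeros(G.shape[0])
    lam_prev = lam.copy()
    for k in range(iters):
        beta = (k - 1.0) / (k + 2.0) if k >= 1 else 0.0
        w = lam + beta * (lam - lam_prev)            # Nesterov extrapolation of the multipliers
        x = -Qinv @ (c + G.T @ w)                    # primal minimizer for the extrapolated multipliers
        lam_prev = lam
        lam = np.maximum(w + (G @ x - g) / L, 0.0)   # projected dual ascent step
    return -Qinv @ (c + G.T @ lam), lam
```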
The goal of this paper is to study approaches to bridge the gap between first-order and second-order type methods for composite convex programs. Our key observations are: i) many well-known operator splitting methods, such as forward-backward splitting (FBS) and Douglas-Rachford splitting (DRS), actually define a fixed-point mapping; ii) the optimal solutions of the composite convex program and the solutions of a system of nonlinear equations derived from the fixed-point mapping are equivalent. Solving this kind of system of nonlinear equations enables us to develop second-order type methods. Although these nonlinear equations may be non-differentiable, they are often semi-smooth and their generalized Jacobian matrix is positive semidefinite due to monotonicity. By combining a regularization approach with a known hyperplane projection technique, we propose an adaptive semi-smooth Newton method and establish its convergence to global optimality. Preliminary numerical results on $\ell_1$-minimization problems demonstrate that our second-order type algorithms are able to achieve superlinear or quadratic convergence.
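A minimal sketch of this idea for $\ell_1$-regularized least squares is given below: the forward-backward fixed-point residual is treated as a semi-smooth system of equations and attacked with regularized generalized-Jacobian Newton steps. The hyperplane projection globalization used in the paper is omitted, and the function name `ssn_l1` and the fixed regularization parameter are illustrative.

```python
import numpy as np

def ssn_l1(A, b, mu, iters=30, nu=1e-6):
    """Regularized semi-smooth Newton steps on the forward-backward residual of
    min_x mu*||x||_1 + 0.5*||Ax - b||^2  (illustrative sketch, no globalization)."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    t = 1.0 / np.linalg.norm(AtA, 2)             # step size 1/L for the smooth part
    soft = lambda z, r: np.sign(z) * np.maximum(np.abs(z) - r, 0.0)
    x = np.zeros(n)
    for _ in range(iters):
        z = x - t * (AtA @ x - Atb)              # forward (gradient) step
        F = x - soft(z, t * mu)                  # fixed-point residual F(x)
        d = (np.abs(z) > t * mu).astype(float)   # diagonal of a generalized Jacobian of soft(.)
        J = np.eye(n) - d[:, None] * (np.eye(n) - t * AtA)
        x = x + np.linalg.solve(J + nu * np.eye(n), -F)   # regularized Newton step
    return x
```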
Motivated by a growing list of nontraditional statistical estimation problems of the piecewise kind, this paper provides a survey of known results, supplemented with new results, for the class of piecewise linear-quadratic programs. These are linearly constrained optimization problems with piecewise linear-quadratic (PLQ) objective functions. Starting from a study of the representation of such a function in terms of a family of elementary functions consisting of squared affine functions, squared plus-composite-affine functions, and affine functions themselves, we summarize some local properties of a PLQ function in terms of its first- and second-order directional derivatives. We extend some well-known necessary and sufficient second-order conditions for local optimality of a quadratic program to a PLQ program and provide a dozen such equivalent conditions for strong, strict, and isolated local optimality, showing in particular that a PLQ program has the same characterizations of local minimality as a standard quadratic program. As a consequence of one such condition, we show that the number of strong, strict, or isolated local minima of a PLQ program is finite; this result supplements a recent result about the finite number of directional stationary objective values. Interestingly, these finiteness results can be uncovered by invoking a very powerful property of subanalytic functions; our proof is fairly elementary, however. We discuss applications of PLQ programs in some modern statistical estimation problems. These problems lead to a special class of unconstrained composite programs involving the non-differentiable $\ell_1$-function, for which we show that the task of verifying the second-order stationarity condition can be converted to the problem of checking the copositivity of a certain Schur complement on the nonnegative orthant.