
Sequential Linearization Method for Bound-Constrained Mathematical Programs with Complementarity Constraints

Added by Jeffrey Larson
Publication date: 2020
Language: English





We propose an algorithm for solving bound-constrained mathematical programs with complementarity constraints on the variables. Each iteration of the algorithm solves a linear program with complementarity constraints to obtain an estimate of the active set. The algorithm enforces descent on the objective function to promote global convergence to B-stationary points. We provide a convergence analysis and preliminary numerical results on a range of test problems. We also study the effect of fixing the active constraints in a bound-constrained quadratic program that can be solved at each iteration to obtain fast convergence.
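To make the mechanism concrete, here is a minimal sketch of the idea on a toy problem: each iteration linearizes the objective, solves the resulting linear program with complementarity constraints by enumerating the two branches of a single complementarity pair inside an infinity-norm trust region, and accepts only steps that decrease the objective. This is an illustration under assumed data, not the paper's implementation; the objective, bounds, trust-region rule, and tolerances are all invented here.

```python
# Sketch (not the authors' code): minimize f over 0 <= x <= u with one
# complementarity pair 0 <= x[0] _|_ x[1] >= 0, via sequential
# linearization with branch enumeration and a simple descent test.
import numpy as np
from scipy.optimize import linprog

def f(x):      return (x[0] - 1.0) ** 2 + (x[1] - 0.5) ** 2
def grad_f(x): return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 0.5)])

u = np.array([2.0, 2.0])                       # upper bounds on x

def lpcc_step(x, Delta):
    """Best linearized step among the two complementarity branches."""
    g, best = grad_f(x), None
    for fixed in (0, 1):                       # branch: pin x[fixed] to 0
        if x[fixed] > Delta:                   # branch switch too long
            continue                           #   for this trust region
        lo = np.maximum(-x, -Delta)            # keep 0 <= x + d <= u
        hi = np.minimum(u - x, Delta)          # and |d_i| <= Delta
        lo[fixed] = hi[fixed] = -x[fixed]      # force (x + d)[fixed] = 0
        res = linprog(g, bounds=list(zip(lo, hi)), method="highs")
        if res.success and (best is None or res.fun < best.fun):
            best = res
    # a feasible x has one pair coordinate at 0, so some branch is allowed
    return best.x

x, Delta = np.array([0.0, 0.5]), 0.5           # start on the "wrong" branch
for k in range(30):
    d = lpcc_step(x, Delta)
    if f(x + d) < f(x) - 1e-12:                # enforce descent on f
        x = x + d                              # full step keeps x feasible
    else:
        Delta *= 0.5                           # otherwise shrink the region
print("approximate B-stationary point:", x)    # expect about (1.0, 0.0)
```

Full steps keep the iterates feasible because each branch forces one coordinate of the complementarity pair exactly to zero at the trial point.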



Related research

Helmut Gfrerer, Jane J. Ye (2019)
In this paper, we study the mathematical program with equilibrium constraints (MPEC) formulated as a mathematical program with a parametric generalized equation involving the regular normal cone. We derive a new necessary optimality condition which is sharper than the usual M-stationary condition and is applicable even when no constraint qualifications hold for the corresponding mathematical program with complementarity constraints (MPCC) reformulation.
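For orientation, one common way to write this problem class is shown below; the notation is assumed background rather than quoted from the paper.

```latex
% Assumed generic notation, not quoted from the paper: the MPEC
\min_{x,\,y}\; f(x,y)
\quad \text{s.t.} \quad
0 \in \varphi(x,y) + \widehat{N}_{\Gamma}(y),
% where \widehat{N}_{\Gamma}(y) is the regular (Frechet) normal cone to
% \Gamma at y. When \Gamma is the nonnegative orthant, the generalized
% equation reduces to a complementarity system, giving the MPCC
% reformulation
\min_{x,\,y}\; f(x,y)
\quad \text{s.t.} \quad
0 \le y \;\perp\; \varphi(x,y) \ge 0 .
```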
In this paper, we provide an elementary, geometric, and unified framework to analyze conic programs that we call the strict complementarity approach. This framework allows us to establish error bounds and quantify the sensitivity of the solution. The framework uses three classical ideas from convex geometry and linear algebra: linear regularity of convex sets, facial reduction, and orthogonal decomposition. We show how to use this framework to derive error bounds for linear programming (LP), second order cone programming (SOCP), and semidefinite programming (SDP).
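A standard illustration of the central notion, given here as assumed background rather than a quotation from the paper:

```latex
% For a primal-dual optimal pair (X, S) of a semidefinite program,
% complementary slackness reads
X S = 0, \qquad X \succeq 0, \quad S \succeq 0,
% and strict complementarity additionally requires
\operatorname{rank}(X) + \operatorname{rank}(S) = n .
% In the LP (diagonal) case this is x_i s_i = 0 with x_i + s_i > 0
% for every coordinate i.
```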
Yonggui Yan, Yangyang Xu (2020)
Stochastic gradient methods (SGMs) have been widely used for solving stochastic optimization problems. A majority of existing works assume no constraints or easy-to-project constraints. In this paper, we consider convex stochastic optimization problems with expectation constraints. For these problems, it is often extremely expensive to perform projection onto the feasible set. Several SGMs in the literature can be applied to solve expectation-constrained stochastic problems. We propose a novel primal-dual type SGM based on the Lagrangian function. Different from existing methods, our method incorporates an adaptiveness technique to speed up convergence. At each iteration, our method queries an unbiased stochastic subgradient of the Lagrangian function and then updates the primal variables by an adaptive-SGM step and the dual variables by a vanilla-SGM step. We show that the proposed method has a convergence rate of $O(1/\sqrt{k})$ in terms of the objective error and the constraint violation. Although the convergence rate is the same as those of existing SGMs, we observe significantly faster convergence than an existing non-adaptive primal-dual SGM and a primal SGM on the Neyman-Pearson classification problem and on quadratically constrained quadratic programs. Furthermore, we modify the proposed method to solve convex-concave stochastic minimax problems, for which we perform adaptive-SGM updates on both the primal and dual variables. A convergence rate of $O(1/\sqrt{k})$ in terms of the primal-dual gap is also established for the modified method.
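The following sketch shows the general shape of such a scheme on a one-dimensional toy problem; it is an assumed generic form, not the authors' implementation, and the step sizes and problem data are invented for illustration.

```python
# Primal-dual SGM sketch: AdaGrad-style adaptive step on the primal
# variable, vanilla projected step on the dual multiplier, for
#   min_x E[(x - xi)^2]   s.t.   E[x - zeta] <= 0,
# with xi ~ N(1, 1) and zeta ~ N(0.5, 0.1), so the solution is x* = 0.5.
import numpy as np

rng = np.random.default_rng(0)
x, z = 0.0, 0.0                    # primal iterate, dual multiplier (z >= 0)
G2, eta, rho = 0.0, 0.5, 0.05      # squared-gradient accumulator, step sizes

for k in range(1, 20001):
    xi = rng.normal(1.0, 1.0)      # sample for the objective term
    zeta = rng.normal(0.5, 0.1)    # sample for the constraint term
    gL = 2.0 * (x - xi) + z        # stochastic subgradient of the Lagrangian
                                   # L(x, z) = (x - xi)^2 + z * (x - zeta)
    G2 += gL * gL                  # adaptive (AdaGrad-style) primal update
    x -= eta * gL / (np.sqrt(G2) + 1e-8)

    z = max(0.0, z + rho / np.sqrt(k) * (x - zeta))  # vanilla dual ascent,
                                                     # projected onto z >= 0

print(f"x = {x:.3f}  (target x* = 0.5),  multiplier z = {z:.3f}")
```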
We propose a sigmoidal approximation for the value-at-risk (which we call SigVaR) and use this approximation to tackle nonlinear programs (NLPs) with chance constraints. We prove that the approximation is conservative and that the level of conservatism can be made arbitrarily small for limiting parameter values. The SigVaR approximation brings scalability benefits over exact mixed-integer reformulations because its sample average approximation can be cast as a standard NLP. We also establish explicit connections between SigVaR and other smooth sigmoidal approximations recently reported in the literature. We show that a key benefit of SigVaR over such approximations is that one can establish an explicit connection with the conditional value-at-risk (CVaR) approximation and exploit this connection to obtain initial guesses for the approximation parameters. We present small- and large-scale numerical studies to illustrate the developments.
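The underlying idea can be written schematically as follows; the exact SigVaR parametrization is defined in the paper and is not reproduced here.

```latex
% Schematic form only. A chance constraint
\Pr\bigl[\, g(x,\xi) \le 0 \,\bigr] \;\ge\; 1 - \alpha
% can be written as an expectation of a step function,
\mathbb{E}\bigl[\, \mathbf{1}\{\, g(x,\xi) > 0 \,\}\,\bigr] \;\le\; \alpha ,
% and the discontinuous indicator is replaced by a smooth sigmoid, e.g.
\mathbf{1}\{\, t > 0 \,\} \;\approx\; \frac{1}{1 + e^{-\mu t}},
\qquad \mu > 0,
% so the sample average approximation of the constraint is smooth and can
% be handled by a standard NLP solver; the paper's parameter choices make
% the resulting approximation conservative.
```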
Youwei Liang (2020)
An important method for optimizing a function over the standard simplex is the active set algorithm, which requires projecting the gradient onto a hyperplane, with sign constraints on the variables that lie on the boundary of the simplex. We propose a new algorithm to efficiently project the gradient for this purpose. Furthermore, we apply the proposed gradient projection method to quadratic programs (QPs) with standard simplex constraints, where gradient projection is used to explore the feasible region and, once we believe the optimal active set has been identified, we switch to constrained conjugate gradient to accelerate convergence. Specifically, two different directions of gradient projection are used to explore the simplex, namely the projected gradient and the reduced gradient; we choose between the two according to the angle between them. Moreover, we propose two conditions for heuristically guessing the optimal active set: first, that the working set remains unchanged for many iterations, and second, that the angle between the projected gradient and the reduced gradient is small enough. Based on these strategies, a new active set algorithm for solving quadratic programs on the standard simplex is proposed.
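The projection that drives such a method can be illustrated compactly. The routine below is a sketch under assumed conventions, not the paper's algorithm: it projects the steepest-descent direction onto the feasible directions at a point of the simplex by alternating the hyperplane projection with pinning boundary coordinates whose component would turn negative.

```python
# Sketch: project -g onto { d : sum(d) = 0, d[i] >= 0 for i with x[i] = 0 }.
import numpy as np

def project_direction(g, active, tol=1e-12):
    """Illustrative clamp-and-reproject loop (not the paper's method)."""
    d = -np.asarray(g, dtype=float)
    free = np.ones(d.size, dtype=bool)       # coordinates allowed to move
    while free.any():
        dd = np.zeros_like(d)
        dd[free] = d[free] - d[free].mean()  # project onto sum(d) = 0
        bad = free & active & (dd < -tol)    # sign violations on the boundary
        if not bad.any():
            return dd
        free &= ~bad                         # pin violators and re-project
    return np.zeros_like(d)                  # no feasible descent direction

g = np.array([-0.4, 0.1, 0.3])               # gradient at the current iterate
active = np.array([True, False, False])      # coordinates with x[i] = 0
print(project_direction(g, active))          # a feasible descent direction
```

The loop terminates after at most n pinning rounds because the free set strictly shrinks whenever a violation is found.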
