
A proximal DC approach for quadratic assignment problem

Added by Chao Ding
Publication date: 2019
Language: English





In this paper, we show that the quadratic assignment problem (QAP) can be reformulated as an equivalent rank-constrained doubly nonnegative (DNN) problem. Under the framework of the difference-of-convex-functions (DC) approach, a semi-proximal DC algorithm (DCA) is proposed for solving a relaxation of the rank-constrained DNN problem; its subproblems can be solved by the semi-proximal augmented Lagrangian method (sPALM). We show that the generated sequence converges to a stationary point of the corresponding DC problem, which is feasible for the rank-constrained DNN problem. Moreover, numerical experiments demonstrate that for most QAP instances the proposed approach finds globally optimal solutions efficiently, while for the others it provides good feasible solutions in a reasonable time.
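For intuition, one standard way to encode a rank constraint on a positive semidefinite matrix in DC form (the paper's precise reformulation may differ in detail) uses the Ky Fan r-norm, i.e. the sum of the r largest eigenvalues:

    \operatorname{rank}(Y) \le r, \; Y \succeq 0
    \quad \Longleftrightarrow \quad
    \langle I, Y \rangle - \|Y\|_{(r)} = 0,
    \qquad \|Y\|_{(r)} := \sum_{i=1}^{r} \lambda_i(Y).

Both <I, Y> and ||Y||_(r) are convex, so penalizing the nonnegative gap rho(<I, Y> - ||Y||_(r)) produces a difference of two convex functions; a DCA then linearizes the concave part at each iterate and solves the resulting convex subproblem, here by sPALM.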



Related research

In this paper, we aim to solve high-dimensional convex quadratic programming (QP) problems with a large number of quadratic terms and linear equality and inequality constraints. In order to solve the targeted problems to a desired accuracy efficiently, we develop a two-phase proximal augmented Lagrangian method: Phase I generates a reasonably good initial point to warm-start Phase II, which then computes an accurate solution efficiently. More specifically, in Phase I, based on the recently developed symmetric Gauss-Seidel (sGS) decomposition technique, we design a novel sGS-based semi-proximal augmented Lagrangian method for the purpose of finding a solution of low to medium accuracy. Then, in Phase II, a proximal augmented Lagrangian algorithm is proposed to obtain a more accurate solution efficiently. Extensive numerical results evaluating the performance of our proposed algorithm against the highly optimized commercial solver Gurobi and the open-source solver OSQP demonstrate the high efficiency and robustness of our proposed algorithm for solving various classes of large-scale convex QP problems.
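As a rough illustration of the Phase II building block, the following is a minimal proximal augmented Lagrangian iteration for an equality-constrained convex QP. This is a toy sketch under our own assumptions: the function name and parameters are illustrative, it handles equality constraints only, and the actual two-phase method additionally treats inequality constraints and uses the sGS decomposition in Phase I.

    import numpy as np

    def proximal_alm_qp(Q, c, A, b, sigma=10.0, tau=1.0, iters=500, tol=1e-8):
        # Sketch: min 0.5 x'Qx + c'x  s.t.  Ax = b, solved by a proximal
        # augmented Lagrangian method (equality constraints only).
        n = Q.shape[0]
        x = np.zeros(n)
        y = np.zeros(A.shape[0])
        for _ in range(iters):
            # x-update: minimizing the proximal AL in x reduces to a linear solve.
            H = Q + sigma * A.T @ A + np.eye(n) / tau
            rhs = -c - A.T @ y + sigma * A.T @ b + x / tau
            x = np.linalg.solve(H, rhs)
            # Multiplier ascent step on the equality residual.
            r = A @ x - b
            y = y + sigma * r
            if np.linalg.norm(r) < tol:
                break
        return x, y

    # Tiny usage example on a random strictly convex QP.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    Q = M.T @ M + np.eye(4)              # positive definite Hessian
    c = rng.standard_normal(4)
    A = rng.standard_normal((2, 4))
    b = rng.standard_normal(2)
    x, y = proximal_alm_qp(Q, c, A, b)
    print("equality residual:", np.linalg.norm(A @ x - b))

The proximal term (1/(2 tau)) ||x - x_k||^2 keeps each subproblem well conditioned even when Q is only positive semidefinite, which is the same role the semi-proximal terms play in the sGS-based Phase I.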
This paper is concerned with a class of zero-norm regularized piecewise linear-quadratic (PLQ) composite minimization problems, which covers the zero-norm regularized $\ell_1$-loss minimization problem as a special case. For this class of nonconvex nonsmooth problems, we show that its equivalent MPEC reformulation is partially calm on the set of global optima and make use of this property to derive a family of equivalent DC surrogates. Then, we propose a proximal majorization-minimization (MM) method, a convex relaxation approach not in the DC algorithm framework, for solving one of the DC surrogates which is a semiconvex PLQ minimization problem involving three nonsmooth terms. For this method, we establish its global convergence and linear rate of convergence, and under suitable conditions show that the limit of the generated sequence is not only a local optimum but also a good critical point in a statistical sense. Numerical experiments are conducted with synthetic and real data for the proximal MM method with the subproblems solved by a dual semismooth Newton method to confirm our theoretical findings, and numerical comparisons with a convergent indefinite-proximal ADMM for the partially smoothed DC surrogate verify its superiority in the quality of solutions and computing time.
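For reference, the MPEC reformulation rests on a standard variational characterization of the zero-norm (notation ours):

    \|x\|_0 = \min_{w \in [0,1]^n} \bigl\{ \langle e, w \rangle \;:\; \langle w, |x| \rangle = \|x\|_1 \bigr\},

since the equality constraint forces w_i = 1 wherever x_i != 0, and minimizing <e, w> sets w_i = 0 elsewhere. If the zero-norm term carries weight nu and the equality constraint is penalized with parameter rho, minimizing out w yields the capped-$\ell_1$ penalty sum_i min(rho |x_i|, nu) = rho ||x||_1 - sum_i max(rho |x_i| - nu, 0), one concrete member of the kind of DC surrogate family derived in the paper.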
Javad Mohammadi, Soummya Kar, 2014
The trend in the electric power system is to move towards increased amounts of distributed resources, which suggests a transition from the current highly centralized control structure to a more distributed one. In this paper, we propose a method that enables a fully distributed solution of the DC Optimal Power Flow problem (DC-OPF), i.e., the generation settings that minimize cost while supplying the load and keeping all line flows below their limits are determined in a distributed fashion. The approach consists of a distributed procedure that aims at solving the first-order optimality conditions, in which individual bus optimization variables are iteratively updated through simple local computations and information exchanged with neighboring entities. In particular, the update for a specific bus consists of a term which takes into account the coupling between the neighboring Lagrange multiplier variables and a local innovation term that enforces the demand/supply balance. The buses exchange information on the current update of their multipliers and the bus angle with their neighboring buses. An analytical proof is given that the proposed method converges to the optimal solution of the DC-OPF. The performance is evaluated using the IEEE Reliability Test System as a test case.
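Concretely, with quadratic generation costs a_i g_i^2 + b_i g_i and line limits ignored for the moment (the paper does enforce them), the first-order optimality conditions that such a distributed procedure targets involve only bus i and its neighbor set N_i:

    2 a_i g_i + b_i = \lambda_i, \qquad
    g_i - d_i = \sum_{j \in \mathcal{N}_i} B_{ij} (\theta_i - \theta_j), \qquad
    \sum_{j \in \mathcal{N}_i} B_{ij} (\lambda_i - \lambda_j) = 0.

Each condition couples a bus only to its neighbors, which is why iteratively exchanging multiplier and angle updates with neighboring buses can suffice to drive the iteration to the global optimum.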
In this paper, the optimization problem of the supervised distance preserving projection (SDPP) for data dimension reduction (DR) is considered; it is equivalent to a rank-constrained least squares semidefinite program (RCLSSDP). To overcome the difficulties caused by the rank constraint, a difference-of-convex (DC) regularization strategy is employed, which transforms the RCLSSDP into a series of least squares semidefinite programs with DC regularization (DCLSSDP). An inexact proximal DC algorithm with a sieving strategy (s-iPDCA) is proposed for solving the DCLSSDP, whose subproblems are solved by the accelerated block coordinate descent (ABCD) method. Convergence analysis shows that the sequence generated by s-iPDCA globally converges to stationary points of the corresponding DC problem. To show the efficiency of the proposed algorithm for solving the RCLSSDP, s-iPDCA is compared with the classical proximal DC algorithm (PDCA) and the PDCA with extrapolation (PDCAe) on a DR experiment with the COIL-20 database; the results show that s-iPDCA outperforms PDCA and PDCAe in solving efficiency. Moreover, DR experiments for face recognition on the ORL and YaleB databases demonstrate that the rank-constrained kernel SDPP (RCKSDPP) is effective and competitive in recognition accuracy compared with kernel semidefinite SDPP (KSSDPP) and kernel principal component analysis (KPCA).
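To make the baseline concrete, one step of a classical PDCA can be sketched on a simple vector analogue. Everything below is our own illustrative choice (a least squares loss with the DC sparsity penalty rho ||x||_1 minus rho times the sum of the k largest |x_i|), not the paper's matrix-variable DCLSSDP with sieving and ABCD inner solves.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def pdca_sparse_ls(A, b, k, rho=0.1, iters=500):
        # Classical proximal DC algorithm for
        #   min 0.5 ||Ax - b||^2 + rho * ||x||_1 - rho * (sum of k largest |x_i|).
        n = A.shape[1]
        L = np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the smooth gradient
        x = np.zeros(n)
        for _ in range(iters):
            # Subgradient of the concave part: sign pattern on the k largest entries.
            xi = np.zeros(n)
            top = np.argsort(-np.abs(x))[:k]
            xi[top] = np.sign(x[top])
            # Proximal gradient step on the convex majorization.
            grad = A.T @ (A @ x - b) - rho * xi
            x = soft_threshold(x - grad / L, rho / L)
        return x

PDCAe differs mainly in adding an extrapolation point (as in FISTA) before the proximal step.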
Optimizing with group sparsity is significant for enhancing model interpretability in machine learning applications, e.g., feature selection, compressed sensing and model compression. However, for large-scale stochastic training problems, effective group-sparsity exploration is typically hard to achieve. In particular, state-of-the-art stochastic optimization algorithms usually generate only dense solutions. To overcome this shortcoming, we propose a stochastic method, the Half-space Stochastic Projected Gradient (HSPG) method, to find solutions of high group sparsity while maintaining convergence. Initialized by a simple Prox-SG step, the HSPG method relies on a novel Half-Space Step to substantially boost the sparsity level. Numerically, HSPG demonstrates its superiority on deep neural networks, e.g., VGG16, ResNet18 and MobileNetV1, by computing solutions of higher group sparsity with competitive objective values and generalization accuracy.
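The mechanism of such a Half-Space Step can be sketched in a few lines; here eps, alpha, and the exact inclusion test are illustrative stand-ins for the paper's construction:

    import numpy as np

    def half_space_step(x, grad, groups, alpha=0.1, eps=0.0):
        # Sketch of a half-space projection step in the spirit of HSPG.
        # A gradient trial point is kept for a group only if it stays in the
        # half-space {z : <z, x_g> >= eps * ||x_g||^2}; otherwise the whole
        # group is zeroed out, which is what boosts group sparsity.
        x_new = x.copy()
        trial = x - alpha * grad         # grad: (stochastic) gradient of the loss
        for g in groups:                 # g: index array of one variable group
            xg = x[g]
            if np.linalg.norm(xg) == 0.0:
                x_new[g] = 0.0           # groups already at zero stay at zero
            elif trial[g] @ xg < eps * (xg @ xg):
                x_new[g] = 0.0           # trial left the half-space: prune group
            else:
                x_new[g] = trial[g]      # otherwise take the plain gradient step
        return x_new

The appeal in the stochastic setting is that this test can zero an entire group in one shot, whereas stochastic proximal gradient steps tend to leave many groups with small but nonzero entries.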
