
Optimization with Least Constraint Violation

Posted by Yu-Hong Dai
Publication date: 2020
Language: English





The study of theory and algorithms for constrained optimization usually assumes that the feasible region of the optimization problem is nonempty. However, there are many important practical optimization problems for which it is not known whether the feasible region is nonempty, and for which it is preferable to find optimizers of the objective function with the least constraint violation. A natural way to deal with these problems is to extend the constrained optimization problem to the problem of optimizing the objective function over the set of points with the least constraint violation. Firstly, the minimization problem with least constraint violation is proved to be a Lipschitz equality constrained optimization problem when the original problem is a convex optimization problem with possibly inconsistent conic constraints, and it can be reformulated as a mathematical program with equilibrium constraints (MPEC). Secondly, for nonlinear programming problems with possibly inconsistent constraints, various types of stationary points are presented for the mathematical program with complementarity constraints (MPCC) that is equivalent to the minimization problem with least constraint violation, and an elegant necessary optimality condition, named the L-stationary condition, is established from the classical optimality theory of Lipschitz continuous optimization. Finally, a smoothing Fischer-Burmeister function method is constructed for the nonlinear programming case for solving the problem of minimizing the objective function with the least constraint violation. It is demonstrated that, as the positive smoothing parameter approaches zero, any point in the outer limit of the KKT-point mapping is an L-stationary point of the equivalent MPCC problem.
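As a concrete illustration of the smoothing step (the abstract does not state the exact smoothing formula, so the parameterization below is an assumption), one common smoothed variant of the Fischer-Burmeister function perturbs the square root so the function becomes differentiable; as the smoothing parameter shrinks to zero it recovers the nonsmooth FB function, whose zeros encode the complementarity conditions of the MPCC. A minimal Python sketch:

```python
import numpy as np

def fb_smoothed(a, b, mu):
    """Smoothed Fischer-Burmeister function (one common variant):

        phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2).

    For mu = 0 this is the standard FB function, whose zero set is
    exactly the complementarity set {a >= 0, b >= 0, a*b = 0}.
    """
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

# At a complementary pair (a, b) = (0.5, 0.0), phi_mu approaches 0
# as the positive smoothing parameter mu approaches zero:
for mu in (1.0, 0.1, 0.01, 0.0):
    print(mu, fb_smoothed(0.5, 0.0, mu))
```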




Read also

Gradient-free optimization methods, such as surrogate-based optimization (SBO) methods and genetic (GA) or evolutionary (EA) algorithms, have gained popularity in the field of constrained optimization of expensive black-box functions. However, the constraint-handling methods used by both classes of solvers do not usually guarantee strictly feasible candidates during optimization. This can become an issue in applied engineering problems where design variables must remain feasible for simulations not to fail. We propose a constraint-handling method for computationally inexpensive constraint functions which guarantees strictly feasible candidates when using a surrogate-based optimizer. We compare our method to other SBO, GA/EA and gradient-based algorithms on two analytical test functions (one relatively simple, one relatively hard) and an applied fully-resolved Computational Fluid Dynamics (CFD) problem concerned with optimization of the undulatory swimming of a fish-like body, and show that the proposed algorithm achieves favorable results while guaranteeing feasible candidates.
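The abstract does not describe the mechanism of the proposed constraint-handling method; purely as an illustration of the requirement it enforces, a rejection filter that only ever passes strictly feasible candidates to the expensive simulation could look like the following sketch (the function names and the toy constraint are hypothetical):

```python
import numpy as np

def sample_strictly_feasible(constraints, lower, upper, n_candidates, rng):
    """Rejection-sample candidates satisfying all inexpensive
    constraints g_i(x) < 0 strictly, so the expensive black-box
    simulation is never invoked on an infeasible design."""
    feasible = []
    while len(feasible) < n_candidates:
        x = rng.uniform(lower, upper)
        if all(g(x) < 0.0 for g in constraints):
            feasible.append(x)
    return np.array(feasible)

rng = np.random.default_rng(0)
# Toy inexpensive constraint: the design must lie inside the unit disk.
candidates = sample_strictly_feasible(
    [lambda x: x @ x - 1.0],
    lower=np.array([-1.0, -1.0]),
    upper=np.array([1.0, 1.0]),
    n_candidates=5,
    rng=rng,
)
print(candidates)
```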
We conduct a study and comparison of superiorization and optimization approaches for the reconstruction problem of superiorized/regularized least-squares solutions of underdetermined linear equations with nonnegativity variable bounds. Regarding superiorization, the state of the art is examined for this problem class, and a novel approach is proposed that employs proximal mappings and is structurally similar to the established forward-backward optimization approach. Regarding convex optimization, accelerated forward-backward splitting with inexact proximal maps is worked out and applied to both the natural splitting least-squares term/regularizer and to the reverse splitting regularizer/least-squares term. Our numerical findings suggest that superiorization can approach the solution of the optimization problem and leads to comparable results at significantly lower costs, after appropriate parameter tuning. On the other hand, applying accelerated forward-backward optimization to the reverse splitting slightly outperforms superiorization, which suggests that convex optimization can approach superiorization too, using a suitable problem splitting.
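For reference, the basic (unaccelerated) forward-backward iteration for the natural splitting above, i.e. $\min_x \frac{1}{2}\|Ax-b\|^2$ subject to $x \ge 0$, alternates a gradient step on the least-squares term with the proximal map of the nonnegativity indicator, which is simply projection onto the nonnegative orthant. A minimal sketch (the papers' accelerated and inexact-prox variants are more elaborate):

```python
import numpy as np

def forward_backward_nnls(A, b, step, n_iter):
    """Forward-backward splitting for
        min 0.5 * ||A x - b||^2   subject to   x >= 0.
    Forward step: gradient of the least-squares term.
    Backward step: prox of the nonnegativity indicator = projection."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)              # forward (gradient) step
        x = np.maximum(0.0, x - step * grad)  # backward (prox) step
    return x

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 2.0])
step = 1.0 / np.linalg.norm(A, 2) ** 2  # step <= 1/L with L = ||A||_2^2
print(forward_backward_nnls(A, b, step, 500))
```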
A framework is proposed for solving general convex quadratic programs (CQPs) from an infeasible starting point by invoking an existing feasible-start algorithm tailored for inequality-constrained CQPs. The central tool is an exact penalty function scheme equipped with a penalty-parameter updating rule. The feasible-start algorithm merely has to satisfy certain general requirements, and so does the updating rule. Under mild assumptions, the framework is proved to converge on CQPs with both inequality and equality constraints and, at a negligible additional cost per iteration, produces an infeasibility certificate, together with a feasible point for an (approximately) $\ell_1$-least relaxed feasible problem when the given problem does not have a feasible solution. The framework is applied to a feasible-start constraint-reduced interior-point algorithm previously proved to be highly performant on problems with many more constraints than variables (imbalanced). Numerical comparison with popular codes (SDPT3, SeDuMi, MOSEK) is reported on both randomly generated problems and support-vector machine classifier training problems. The results show that the constraint-reduced algorithm typically outperforms these codes on imbalanced problems.
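The abstract does not display the penalty scheme; as a sketch of a standard exact $\ell_1$ penalty of the kind described, for a CQP $\min_x \tfrac{1}{2}x^{\top}Qx + c^{\top}x$ subject to $Ax \le b$, $Ex = d$, one minimizes, with penalty parameter $\rho > 0$ driven by the updating rule,

$$\min_{x}\ \tfrac{1}{2}x^{\top}Qx + c^{\top}x \;+\; \rho\Big(\big\|\max\{0,\, Ax - b\}\big\|_{1} + \|Ex - d\|_{1}\Big).$$

For $\rho$ sufficiently large, the minimizers coincide with those of the original CQP whenever it is feasible; when it is infeasible, the minimizers of the penalty term alone are precisely the $\ell_1$-least relaxed feasible points mentioned above.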
We revisit the standard saddle-point method based on conjugate duality for solving convex minimization problems. Our aim is to reduce or remove unnecessary topological restrictions on the constraint set. Dual equalities and characterizations of the minimizers are obtained with weak constraint qualifications or with none at all. The main idea is to work with intrinsic topologies which reflect some geometry of the objective function. The abstract results of this article are applied in other papers to the Monge-Kantorovich optimal transport problem and the minimization of entropy functionals.
Lu Sitong, Li Qinana (2021)
The support vector machine (SVM) is an important and fundamental technique in machine learning. Soft-margin SVM models have stronger generalization performance than the hard-margin SVM. Most existing works use the hinge-loss function, which can be regarded as an upper bound of the 0-1 loss function. However, it cannot explicitly limit the number of misclassified samples. In this paper, we use the idea of the soft-margin SVM and propose a new SVM model with a sparse constraint. Our model can strictly limit the number of misclassified samples by expressing the soft-margin constraint as a sparse constraint. By constructing a majorization function, a majorization penalty method can be used to solve the sparse-constrained optimization problem. We apply the conjugate gradient (CG) method to solve the resulting subproblem. Extensive numerical results demonstrate the impressive performance of the proposed majorization penalty method.
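To make the distinction concrete (a toy sketch; the data and variable names are hypothetical, and this is not the paper's algorithm), the hinge loss upper-bounds the number of margin violations but does not cap it, whereas a sparse constraint bounds the count of nonzero slacks directly:

```python
import numpy as np

def margin_violations(w, b, X, y):
    """Count samples violating the margin y_i * (w^T x_i + b) >= 1.
    A sparse constraint ||(1 - y * (X @ w + b))_+||_0 <= s bounds this
    count directly; the hinge loss only upper-bounds it in sum."""
    slack = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return int(np.count_nonzero(slack)), float(slack.sum())

# Toy data: two samples per class.
X = np.array([[2.0, 0.0], [1.5, 0.5], [-2.0, 0.0], [-0.2, 0.1]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = np.array([1.0, 0.0]), 0.0
count, hinge = margin_violations(w, b, X, y)
print(count, hinge)  # sparse-constraint count vs. hinge-loss value
```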