
Optimality Conditions for Constrained Minimax Optimization

Posted by: Yu-Hong Dai
Publication date: 2020
Paper language: English

Minimax optimization problems arise both in modern machine learning, including generative adversarial networks, adversarial training and multi-agent reinforcement learning, and in traditional research areas such as saddle point problems, numerical partial differential equations and optimality conditions of equality constrained optimization. For the unconstrained continuous nonconvex-nonconcave setting, Jin, Netrapalli and Jordan (2019) carefully considered the basic question of what a proper definition of a local optimum of a minimax optimization problem is, and proposed a notion of local optimality called local minimax. We extend the definition of local minimax points to constrained nonconvex-nonconcave minimax optimization problems. By analyzing Jacobian uniqueness conditions for the lower-level maximization problem and the strong regularity of its Karush-Kuhn-Tucker conditions, we provide both necessary and sufficient optimality conditions for local minimax points of constrained minimax optimization problems.
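To make the setting concrete, here is a minimal numerical sketch (illustrative only; the paper develops optimality conditions, not algorithms): two-timescale projected gradient descent-ascent on a toy nonconvex-nonconcave problem whose inner maximization is box-constrained. The objective, step sizes, and iteration count are all hypothetical choices.

```python
import numpy as np

# Toy constrained minimax problem (illustrative; not from the paper):
#   min_x  max_{y in [-1, 1]}  f(x, y) = x^2 * y - y^3 / 3
# The inner maximizer is y*(x) = min(|x|, 1), and (0, 0) is a local
# minimax point in the sense of Jin, Netrapalli and Jordan (2019),
# extended to the constrained setting.

def f(x, y):
    return x**2 * y - y**3 / 3.0

def grad_x(x, y):
    return 2.0 * x * y

def grad_y(x, y):
    return x**2 - y**2

# Two-timescale projected gradient descent-ascent: a fast projected
# ascent step on y, followed by a slow descent step on x.
x, y = 0.8, -0.5
eta_x, eta_y = 0.01, 0.2
for _ in range(20000):
    y = np.clip(y + eta_y * grad_y(x, y), -1.0, 1.0)  # project onto [-1, 1]
    x = x - eta_x * grad_x(x, y)

print(f"approximate local minimax point: ({x:.4f}, {y:.4f}), f = {f(x, y):.6f}")
```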



Read also

In this paper we study second-order optimality conditions for non-convex set-constrained optimization problems. For a convex set-constrained optimization problem, it is well known that second-order optimality conditions involve the support function of the second-order tangent set. In this paper we propose two approaches for establishing second-order optimality conditions in the non-convex case. In the first approach we extend the concept of the support function so that it is applicable to general non-convex set-constrained problems, whereas in the second approach we introduce the notion of the directional regular tangent cone and apply classical results of convex duality theory. Besides the second-order optimality conditions, the novelty of our approach lies in the systematic introduction and use, respectively, of directional versions of these constructions.
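For orientation, the sketch below checks the classical second-order sufficient condition in the smooth equality-constrained special case, where the support-function machinery reduces to positive definiteness of the Lagrangian Hessian on the nullspace of the constraint Jacobian. The toy problem, multiplier, and tolerance are hypothetical; the paper's directional, non-convex set-constrained theory goes well beyond this case.

```python
import numpy as np
from scipy.linalg import null_space

# Classical second-order sufficient condition at a KKT point of
#   min f(x)  s.t.  h(x) = 0,
# checked on the nullspace of the constraint Jacobian.  This is the
# smooth equality-constrained special case only, not the directional
# set-constrained theory developed in the paper.

# Toy data: f(x) = x1 + x2,  h(x) = x1^2 + x2^2 - 2.
x_star = np.array([-1.0, -1.0])          # KKT point
lam = 0.5                                # multiplier: grad f + lam * grad h = 0
grad_h = 2.0 * x_star                    # constraint Jacobian (1 x 2)
hess_L = lam * 2.0 * np.eye(2)           # Hessian of L = f + lam * h (f is linear)

# Basis Z for the nullspace of the Jacobian (the critical cone here).
Z = null_space(grad_h.reshape(1, -1))
reduced_hessian = Z.T @ hess_L @ Z

# Positive definiteness of the reduced Hessian certifies a strict local minimum.
eigvals = np.linalg.eigvalsh(reduced_hessian)
print("reduced Hessian eigenvalues:", eigvals)
print("second-order sufficient condition holds:", bool(np.all(eigvals > 1e-10)))
```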
Hidden convex optimization is a class of nonconvex optimization problems that can be globally solved in polynomial time via equivalent convex programming reformulations. In this paper, we focus on checking local optimality in hidden convex optimization. We first introduce a class of hidden convex optimization problems by joining the classical nonconvex trust-region subproblem (TRS) with convex optimization (CO), and then present a comprehensive study of local optimality conditions. In order to guarantee the existence of a necessary and sufficient condition for local optimality, we need more restrictive assumptions. To our surprise, while (TRS) has at most one local non-global minimizer and (CO) has no local non-global minimizer, their joint problem can have more than one local non-global minimizer.
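As a concrete instance of hidden convexity, the following sketch globally solves a small indefinite (TRS) through the well-known characterization x* = -(A + lam*I)^{-1} b with lam >= max(0, -lambda_min(A)) and lam * (r - ||x*||) = 0, using bisection on the multiplier. It ignores the so-called hard case and is a minimal assumed illustration, not a procedure from the paper.

```python
import numpy as np

# Trust-region subproblem:  min 0.5 * x^T A x + b^T x  s.t.  ||x|| <= r.
# Nonconvex when A is indefinite, yet globally solvable: bisect on the
# multiplier lam until the step length ||(A + lam I)^{-1} b|| matches r.
# The hard case (b orthogonal to the bottom eigenspace) is ignored.

def solve_trs(A, b, r, tol=1e-10):
    lam_min = np.linalg.eigvalsh(A)[0]
    # Interior solution is valid only when A is positive definite.
    if lam_min > 0:
        x = np.linalg.solve(A, -b)
        if np.linalg.norm(x) <= r:
            return x
    # Boundary solution: step length is decreasing in lam on (lo, inf).
    step = lambda lam: np.linalg.norm(
        np.linalg.solve(A + lam * np.eye(len(b)), -b))
    lo = max(0.0, -lam_min) + 1e-12
    hi = lo + 1.0
    while step(hi) > r:          # grow hi until the step is short enough
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if step(mid) > r else (lo, mid)
    return np.linalg.solve(A + hi * np.eye(len(b)), -b)

A = np.array([[-2.0, 0.0], [0.0, 1.0]])   # indefinite: the problem is nonconvex
b = np.array([1.0, 1.0])
x = solve_trs(A, b, r=1.0)
print("global TRS solution:", x, " norm:", np.linalg.norm(x))
```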
Kuang Bai, Jane Ye (2020)
The bilevel program is an optimization problem where the constraint involves solutions to a parametric optimization problem. It is well known that the value function reformulation provides an equivalent single-level optimization problem, but it results in a nonsmooth optimization problem which never satisfies the usual constraint qualifications such as the Mangasarian-Fromovitz constraint qualification (MFCQ). In this paper we show that even the first-order sufficient condition for metric subregularity (which is in general weaker than MFCQ) fails at each feasible point of the bilevel program. We introduce the concept of the directional calmness condition and show that, under the directional calmness condition, the directional necessary optimality condition holds. While the directional optimality condition is in general sharper than the non-directional one, the directional calmness condition is in general weaker than the classical calmness condition and hence is more likely to hold. We perform directional sensitivity analysis of the value function and propose directional quasi-normality as a sufficient condition for directional calmness. An example is given to show that the directional quasi-normality condition may hold for the bilevel program.
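A minimal sketch of the value function reformulation on a toy bilevel program, showing why the reformulated inequality g(x, y) = f(x, y) - V(x) <= 0 is active at every feasible point, which is the root of the constraint-qualification failure described above. All problem data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy bilevel program:
#   min_{x,y} F(x,y) = x^2 + y^2   s.t.  y in argmin_w f(x,w),  f(x,w) = (w-x)^2.
# Value function reformulation: replace the argmin constraint by
#   g(x,y) = f(x,y) - V(x) <= 0,   where  V(x) = min_w f(x,w).

def f(x, w):
    return (w - x) ** 2

def V(x):
    # Lower-level optimal value, computed numerically.
    return minimize_scalar(lambda w: f(x, w)).fun

def g(x, y):
    return f(x, y) - V(x)

# At every feasible point the inequality holds with equality (zero slack),
# which is why MFCQ, and even the first-order sufficient condition for
# metric subregularity, fails for this reformulation.
for x in [-1.0, 0.0, 2.0]:
    y = x                        # the unique lower-level solution y*(x) = x
    print(f"x = {x:+.1f}:  g(x, y*) = {g(x, y):.2e}  (active)")
```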
We prove the following uniqueness result for the buckling plate. Assume there exists a smooth domain which minimizes the first buckling eigenvalue for a plate among all smooth domains of given volume. Then the domain must be a ball. The proof uses the second variation for the buckling eigenvalue and an inequality by L. E. Payne to establish this result.
Wenbo Gao, Donald Goldfarb (2018)
We expand the scope of the alternating direction method of multipliers (ADMM). Specifically, we show that ADMM, when employed to solve problems with multiaffine constraints that satisfy certain verifiable assumptions, converges to the set of constrained stationary points if the penalty parameter in the augmented Lagrangian is sufficiently large. When the Kurdyka-Łojasiewicz (KŁ) property holds, this is strengthened to convergence to a single constrained stationary point. Our analysis applies under assumptions that we have endeavored to make as weak as possible. It applies to problems that involve nonconvex and/or nonsmooth objective terms, in addition to the multiaffine constraints that can involve multiple (three or more) blocks of variables. To illustrate the applicability of our results, we describe examples including nonnegative matrix factorization, sparse learning, risk parity portfolio selection, nonconvex formulations of convex problems, and neural network training. In each case, our ADMM approach encounters only subproblems that have closed-form solutions.
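As an illustration of the multiaffine setting, here is a minimal ADMM sketch for matrix factorization with the bilinear constraint W H = Z. Nonnegativity, the paper's verifiable assumptions, and any convergence safeguards are omitted, and the penalty value rho = 10 is an arbitrary choice.

```python
import numpy as np

# ADMM sketch for a problem with a multiaffine (bilinear) constraint:
#   min_{W,H,Z} 0.5 * ||X - Z||_F^2   s.t.   W @ H = Z.
# Each subproblem below has a closed-form solution, echoing the paper's
# theme, but this is an assumed toy setup, not the paper's formulation.

rng = np.random.default_rng(0)
m, n, k = 20, 15, 3
X = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # rank-k data

W = rng.standard_normal((m, k))
H = rng.standard_normal((k, n))
Z = X.copy()
Lam = np.zeros((m, n))        # dual variable for W @ H = Z
rho, eps = 10.0, 1e-8         # penalty parameter, Tikhonov damping

for it in range(300):
    B = Z - Lam / rho
    # W- and H-updates: ridge-regularized least squares (closed form).
    W = B @ H.T @ np.linalg.inv(H @ H.T + eps * np.eye(k))
    H = np.linalg.inv(W.T @ W + eps * np.eye(k)) @ W.T @ B
    # Z-update: prox of 0.5 * ||X - .||_F^2 at W @ H + Lam / rho.
    Z = (X + rho * (W @ H) + Lam) / (1.0 + rho)
    # Dual ascent on the multiaffine constraint residual.
    Lam = Lam + rho * (W @ H - Z)

print("constraint residual ||WH - Z||:", np.linalg.norm(W @ H - Z))
print("relative fit ||X - WH|| / ||X||:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```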