
Directional quasi/pseudo-normality as sufficient conditions for metric subregularity

Posted by: Jane Ye
Publication date: 2019
Language: English





In this paper we study sufficient conditions for metric subregularity of a set-valued map which is the sum of a single-valued continuous map and a locally closed subset. First, we derive a sufficient condition for metric subregularity which is weaker than the so-called first-order sufficient condition for metric subregularity (FOSCMS) by adding an extra sequential condition. Then we introduce directional versions of quasi-normality and pseudo-normality, which are stronger than the new weak sufficient condition for metric subregularity but weaker than the classical quasi-normality and pseudo-normality, respectively. Moreover, we introduce a nonsmooth version of the second-order sufficient condition for metric subregularity and show that it is a sufficient condition for the new sufficient condition for metric subregularity to hold. An example is used to illustrate that directional pseudo-normality can be weaker than FOSCMS. For the class of set-valued maps where the single-valued mapping is affine and the abstract set is the union of finitely many convex polyhedral sets, we show that pseudo-normality, and hence directional pseudo-normality, holds automatically at each point of the graph. Finally, we apply our results to complementarity and Karush-Kuhn-Tucker systems.
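
For reference, the central property can be stated precisely. The notation below ($\Phi$, $g$, $D$, $\kappa$) is ours, chosen to match the setting of the abstract, and is not taken from the paper itself.

```latex
% Standard definition of metric subregularity (notation ours).
A set-valued map $\Phi\colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ is
\emph{metrically subregular} at $(\bar{x},\bar{y})\in\operatorname{gph}\Phi$
if there exist $\kappa>0$ and a neighborhood $U$ of $\bar{x}$ such that
\[
  \operatorname{dist}\bigl(x,\Phi^{-1}(\bar{y})\bigr)
  \;\le\; \kappa\,\operatorname{dist}\bigl(\bar{y},\Phi(x)\bigr)
  \qquad \text{for all } x\in U.
\]
% For the map \Phi(x) := g(x) + D studied here (g single-valued continuous,
% D locally closed), metric subregularity at (\bar{x}, 0) is exactly a local
% error bound for the constraint system -g(x) \in D.
```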




Read also

D. Leventhal (2009)
We examine the linear convergence rates of variants of the proximal point method for finding zeros of maximal monotone operators. We begin by showing how metric subregularity is sufficient for linear convergence to a zero of a maximal monotone operator. This result is then generalized to obtain convergence rates for the problem of finding a common zero of multiple monotone operators by considering randomized and averaged proximal methods.
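
As a concrete companion to this result, here is a minimal sketch of the proximal point iteration in Python; the operator T and the step size are our illustrative choices, not taken from the paper.

```python
import numpy as np

# Proximal point method for finding a zero of a maximal monotone operator T:
#   x_{k+1} = (I + lam*T)^{-1}(x_k),  the resolvent of T with step lam.
# Illustrative choice (ours): T = subdifferential of f(x) = |x|, whose
# resolvent is soft-thresholding; the unique zero of T is x = 0.

def resolvent_abs(x, lam):
    """Resolvent of lam * d|.|: soft-thresholding at level lam."""
    return np.sign(x) * max(abs(x) - lam, 0.0)

x, lam = 5.0, 1.0
for k in range(7):
    x = resolvent_abs(x, lam)
    print(f"iter {k}: x = {x:.4f}")

# This operator is metrically subregular at its zero (|x| grows linearly
# away from 0), and the iterates converge at least linearly (here, even in
# finitely many steps).
```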
Mixed monotone systems form an important class of nonlinear systems that have recently received attention in the abstraction-based control design area. Slightly different definitions exist in the literature, and it remains a challenge to verify mixed monotonicity of a system in general. In this paper, we first clarify the relation between different existing definitions of mixed monotone systems, and then give two sufficient conditions for mixed monotone functions defined on Euclidean space. These sufficient conditions are more general than the ones from the existing control literature, and they suggest that mixed monotonicity is a very generic property. Some discussions are provided on the computational usefulness of the proposed sufficient conditions.
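
As a small concrete instance of one common way such definitions are phrased, the sketch below builds a decomposition function for a linear discrete-time update x+ = Ax by splitting A elementwise; the matrix and the construction are our illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Mixed monotonicity via a decomposition function d(x, xhat):
#   d is nondecreasing in x, nonincreasing in xhat, and d(x, x) = f(x).
# For a linear discrete-time map f(x) = A x, a standard choice splits A
# elementwise into its nonnegative and nonpositive parts.

A = np.array([[0.5, -0.3],
              [0.2,  0.4]])          # illustrative system matrix (ours)
A_pos = np.maximum(A, 0.0)           # nonnegative (cooperative) entries
A_neg = np.minimum(A, 0.0)           # nonpositive (competitive) entries

def d(x, xhat):
    """Decomposition function for f(x) = A x."""
    return A_pos @ x + A_neg @ xhat

x = np.array([1.0, 2.0])
assert np.allclose(d(x, x), A @ x)   # d(x, x) = f(x)

# The embedding map (x, xhat) -> (d(x, xhat), d(xhat, x)) is monotone in the
# mixed order, which is what abstraction-based methods exploit to propagate
# boxes [x, xhat] through the dynamics.
```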
Kuang Bai, Jane Ye (2020)
The bilevel program is an optimization problem whose constraints involve solutions to a parametric optimization problem. It is well known that the value function reformulation provides an equivalent single-level optimization problem, but it results in a nonsmooth optimization problem which never satisfies the usual constraint qualifications such as the Mangasarian-Fromovitz constraint qualification (MFCQ). In this paper we show that even the first-order sufficient condition for metric subregularity (which is in general weaker than MFCQ) fails at each feasible point of the bilevel program. We introduce the concept of a directional calmness condition and show that, under the directional calmness condition, the directional necessary optimality condition holds. While the directional optimality condition is in general sharper than the non-directional one, the directional calmness condition is in general weaker than the classical calmness condition and hence is more likely to hold. We perform a directional sensitivity analysis of the value function and propose directional quasi-normality as a sufficient condition for directional calmness. An example is given to show that the directional quasi-normality condition may hold for the bilevel program.
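
The value function reformulation mentioned above is worth writing out, since the failure of constraint qualifications is visible directly from it; the symbols $F$, $f$, $Y(x)$, $S(x)$, $V$ are generic names we introduce for illustration.

```latex
% Value function reformulation of a bilevel program (generic notation ours).
\begin{align*}
  \text{(BP)}\quad &\min_{x,y}\ F(x,y)
    \quad\text{s.t.}\quad y\in S(x):=\operatorname*{arg\,min}_{y'\in Y(x)} f(x,y'),\\
  \text{(VP)}\quad &\min_{x,y}\ F(x,y)
    \quad\text{s.t.}\quad f(x,y)-V(x)\le 0,\quad y\in Y(x),
    \qquad V(x):=\min_{y'\in Y(x)} f(x,y').
\end{align*}
```

Since $f(x,y)\ge V(x)$ for every $y\in Y(x)$, the inequality constraint in (VP) is active at every feasible point and can never be satisfied strictly, which is the structural reason MFCQ, and as the paper shows even the first-order sufficient condition for metric subregularity, fails.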
Convergence of the gradient descent algorithm has been attracting renewed interest due to its utility in deep learning applications. Even as multiple variants of gradient descent were proposed, the assumption that the gradient of the objective is Lipschitz continuous remained an integral part of the analysis until recently. In this work, we study convergence by focusing on a property that we term concavifiability, instead of Lipschitz continuity of gradients. We show that concavifiability is a necessary and sufficient condition for the upper quadratic approximation, which is key in proving that the objective function decreases after every gradient descent update. We also show that any gradient-Lipschitz function is concavifiable. A constant known as the concavifier, analogous to the gradient Lipschitz constant, is derived and is indicative of the optimal step size. As an application, we demonstrate the utility of finding the concavifier in the convergence of gradient descent through an example inspired by neural networks. We derive bounds on the concavifier to obtain a fixed step size for a single-hidden-layer ReLU network.
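
The link between the upper quadratic approximation and descent is a two-line computation, reproduced here for orientation; $M$ denotes the concavifier (our choice of symbol for the constant described above).

```latex
% Descent from the upper quadratic approximation (M = concavifier; symbol ours).
% Assume for all x, y:
%   f(y) \le f(x) + \langle\nabla f(x),\, y-x\rangle + \tfrac{M}{2}\|y-x\|^2.
% Taking the gradient step y = x - \tfrac{1}{M}\nabla f(x) yields
\[
  f\!\Bigl(x-\tfrac{1}{M}\nabla f(x)\Bigr)
  \;\le\; f(x) - \tfrac{1}{M}\|\nabla f(x)\|^2 + \tfrac{1}{2M}\|\nabla f(x)\|^2
  \;=\; f(x) - \tfrac{1}{2M}\|\nabla f(x)\|^2,
\]
% so the objective strictly decreases whenever \nabla f(x) \ne 0, and 1/M is
% the natural fixed step size.
```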
Minimizing the rank of a matrix subject to constraints is a challenging problem that arises in many applications in control theory, machine learning, and discrete geometry. This class of optimization problems, known as rank minimization, is NP-hard, and for most practical problems there are no efficient algorithms that yield exact solutions. A popular heuristic algorithm replaces the rank function with the nuclear norm (the sum of the singular values) of the decision variable. In this paper, we provide a necessary and sufficient condition that quantifies when this heuristic successfully finds the minimum-rank solution of a linear constraint set. We additionally provide a probability distribution over instances of the affine rank minimization problem such that instances sampled from this distribution satisfy our conditions for success with overwhelming probability, provided the number of constraints is appropriately large. Finally, we give empirical evidence that these probabilistic bounds provide accurate predictions of the heuristic's performance in non-asymptotic scenarios.
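
The nuclear norm heuristic is easy to experiment with. Below is a minimal sketch, assuming cvxpy as the modeling layer (our choice, not the paper's) and a planted rank-1 target with random Gaussian measurement matrices; all problem sizes are arbitrary.

```python
import numpy as np
import cvxpy as cp

# Nuclear norm heuristic for affine rank minimization (illustrative sketch):
#   minimize ||X||_*  subject to  <A_i, X> = b_i,  i = 1..m,
# where the A_i are random Gaussian measurement matrices and b comes from a
# planted rank-1 matrix X0.

rng = np.random.default_rng(0)
n, m = 8, 40
X0 = np.outer(rng.standard_normal(n), rng.standard_normal(n))  # rank-1 target
A = rng.standard_normal((m, n, n))                             # measurement matrices
b = np.array([float(np.sum(Ai * X0)) for Ai in A])             # b_i = <A_i, X0>

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(A, b)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

print("relative recovery error:",
      np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
# With m well above the ~2n degrees of freedom of a rank-1 matrix, the
# heuristic typically recovers X0 up to solver tolerance.
```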