
Trust-region and $p$-regularized subproblems: local nonglobal minimum is the second smallest objective function value among all first-order stationary points

Posted by: Yong Xia
Publication date: 2021
Language: English





The local nonglobal minimizer of the trust-region subproblem, if it exists, is shown to have the second smallest objective function value among all KKT points. This new property is extended to the $p$-regularized subproblem. As a corollary, we show for the first time that finding the local nonglobal minimizer of the Nesterov-Polyak subproblem corresponds to a generalized eigenvalue problem.
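
As a quick illustration of the first claim, below is a minimal numerical sketch (not the authors' code; the data $A$, $b$ and the radius $\Delta$ are illustrative choices) for the trust-region subproblem $\min_x \tfrac12 x^\top A x + b^\top x$ subject to $\|x\| \le \Delta$. It enumerates the boundary KKT points by root-finding on the secular equation $\|(A+\lambda I)^{-1}b\| = \Delta$, $\lambda \ge 0$, and checks the second-order condition on the boundary.

import numpy as np
from scipy.optimize import brentq

A = np.diag([-2.0, 1.0])       # eigenvalues -2 < 1, so the objective is nonconvex
b = np.array([0.25, 2.0])
delta = 1.0
n = len(b)

def f(x):
    return 0.5 * x @ A @ x + b @ x

def x_of(lam):
    # stationarity of the Lagrangian: (A + lam*I) x = -b
    return -np.linalg.solve(A + lam * np.eye(n), b)

def secular(lam):
    return np.linalg.norm(x_of(lam)) ** 2 - delta ** 2

def tangent_curvature(x, lam):
    # smallest eigenvalue of A + lam*I on the tangent space {d : d'x = 0};
    # a positive value certifies a local minimizer on the boundary (lam > 0)
    Q, _ = np.linalg.qr(x.reshape(-1, 1), mode="complete")
    T = Q[:, 1:]
    return np.linalg.eigvalsh(T.T @ (A + lam * np.eye(n)) @ T).min()

# For this data the unconstrained stationary point -A^{-1}b lies outside the ball,
# so every KKT point sits on the boundary: (A + lam*I)x = -b, ||x|| = delta, lam >= 0.
assert np.linalg.norm(np.linalg.solve(A, -b)) > delta

poles = sorted(-np.linalg.eigvalsh(A))           # lam where A + lam*I is singular
grid = np.linspace(0.0, poles[-1] + 10.0, 4001)
kkt = []
for lo, hi in zip(grid[:-1], grid[1:]):
    if any(lo <= p <= hi for p in poles):
        continue                                  # skip intervals containing a pole
    if secular(lo) * secular(hi) < 0:
        lam = brentq(secular, lo, hi)
        kkt.append((lam, x_of(lam)))

# Smallest value = global minimizer; the remaining boundary point with positive
# tangent curvature is the local nonglobal minimizer.
for lam, x in sorted(kkt, key=lambda t: f(t[1])):
    print(f"lam = {lam:6.3f}   f = {f(x):8.4f}   tangent curvature = {tangent_curvature(x, lam):7.3f}")

For this data the script should find three boundary KKT points; the only nonglobal one with positive tangent curvature attains the second smallest objective value, in line with the result above.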




Read also

We establish lower bounds on the complexity of finding $\epsilon$-stationary points of smooth, non-convex high-dimensional functions using first-order methods. We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in $\epsilon$ better than $\epsilon^{-8/5}$, which is within $\epsilon^{-1/15}\log\frac{1}{\epsilon}$ of the best known rate for such methods. Moreover, for functions with Lipschitz first and second derivatives, we prove no deterministic first-order method can achieve convergence rates better than $\epsilon^{-12/7}$, while $\epsilon^{-2}$ is a lower bound for functions with only Lipschitz gradient. For convex functions with Lipschitz gradient, accelerated gradient descent achieves the rate $\epsilon^{-1}\log\frac{1}{\epsilon}$, showing that finding stationary points is easier given convexity.
K. Ito, T. Takeuchi (2009)
The minimum value function appearing in the Tikhonov regularization technique is very useful in determining the regularization parameter, both theoretically and numerically. In this paper, we discuss the properties of the minimum value function. We also propose an efficient method to determine the regularization parameter. A new criterion for the determination of the regularization parameter is also discussed.
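For concreteness (an assumption here, since the abstract does not define it), the minimum value function in Tikhonov regularization typically takes the form $F(\alpha) = \min_x \tfrac12\|Kx - y\|^2 + \tfrac{\alpha}{2}\|x\|^2$, where $K$ is the forward operator, $y$ the data, and $\alpha > 0$ the regularization parameter; parameter-choice criteria are then expressed through $F$ and its derivatives.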
Generalized trust-region subproblem (GT) is a nonconvex quadratic optimization with a single quadratic constraint. It reduces to the classical trust-region subproblem (T) if the constraint set is a Euclidean ball. (GT) is polynomially solvable based on its inherent hidden convexity. In this paper, we study local minimizers of (GT). Unlike (T), which has at most one local nonglobal minimizer, we prove that a two-dimensional (GT) has at most two local nonglobal minimizers, which are shown by example to be attainable. The main contribution of this paper is to prove that, at any local nonglobal minimizer of (GT), not only the strict complementarity condition holds, but also the standard second-order sufficient optimality condition remains necessary. As a corollary, finding all local nonglobal minimizers of (GT), or proving their nonexistence, can be done in polynomial time. Finally, for (GT) in the complex domain, we prove that there is no local nonglobal minimizer, which demonstrates that a real-valued optimization problem may be more difficult to solve than its complex version.
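For reference, a common statement of (GT), assumed here since the abstract does not spell it out, is $\min_x \tfrac12 x^\top A x + a^\top x$ subject to $\tfrac12 x^\top B x + b^\top x + c \le 0$, with $A, B$ symmetric and $B$ possibly indefinite; choosing $B = I$, $b = 0$ and $c = -\tfrac12\Delta^2$ recovers the classical trust-region subproblem (T) with radius $\Delta$.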
We propose a primal-dual interior-point method (IPM) with convergence to second-order stationary points (SOSPs) of nonlinear semidefinite optimization problems, abbreviated as NSDPs. As far as we know, the current algorithms for NSDPs only ensure convergence to first-order stationary points such as Karush-Kuhn-Tucker points. The proposed method generates a sequence approximating SOSPs while minimizing a primal-dual merit function for NSDPs by using scaled gradient directions and directions of negative curvature. Under some assumptions, the generated sequence accumulates at an SOSP with a worst-case iteration complexity. This result is also obtained for a primal IPM with slight modification. Finally, our numerical experiments show the benefits of using directions of negative curvature in the proposed method.
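For readers unfamiliar with the abbreviation, a standard form of an NSDP (assumed here; the paper may also allow additional equality constraints) is $\min_{x \in \mathbb{R}^n} f(x)$ subject to $G(x) \succeq 0$, where $f$ and the symmetric-matrix-valued map $G$ are smooth and $\succeq 0$ denotes positive semidefiniteness; an SOSP is a KKT point that additionally satisfies a second-order curvature condition.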
For nonconvex optimization in machine learning, this article proves that every local minimum achieves the globally optimal value of the perturbable gradient basis model at any differentiable point. As a result, nonconvex machine learning is theoretically as supported as convex machine learning with a handcrafted basis in terms of the loss at differentiable local minima, except in the case when a preference is given to the handcrafted basis over the perturbable gradient basis. The proofs of these results are derived under mild assumptions. Accordingly, the proven results are directly applicable to many machine learning models, including practical deep neural networks, without any modification of practical methods. Furthermore, as special cases of our general results, this article improves or complements several state-of-the-art theoretical results on deep neural networks, deep residual networks, and overparameterized deep neural networks with a unified proof technique and novel geometric insights. A special case of our results also contributes to the theoretical foundation of representation learning.