The local nonglobal minimizer of the trust-region subproblem, if it exists, is shown to have the second smallest objective function value among all KKT points. This new property is extended to the $p$-regularized subproblem. As a corollary, we show for the first time that finding the local nonglobal minimizer of the Nesterov-Polyak subproblem corresponds to a generalized eigenvalue problem.
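For context on the object studied above, the classical trust-region subproblem minimizes $\tfrac12 x^\top A x + g^\top x$ subject to $\|x\| \le \Delta$, and its KKT points satisfy $(A+\lambda I)x=-g$ with $\lambda \ge 0$ complementary to the ball constraint. The following is a minimal numerical sketch (not the paper's algorithm) of computing the global minimizer by bisection on $\lambda$; it assumes the standard "easy case", and all function names are illustrative:

```python
import numpy as np

def trs_global(A, g, delta):
    """Global minimizer of min 0.5 x'Ax + g'x  s.t. ||x|| <= delta,
    via bisection on the Lagrange multiplier lambda in the secular
    equation ||x(lambda)|| = delta, where (A + lambda I) x = -g.
    Easy case only: assumes the hard case does not occur."""
    w, V = np.linalg.eigh(A)           # A = V diag(w) V'
    c = V.T @ g                        # g in the eigenbasis
    lam_min = w[0]

    def x_norm(lam):                   # ||x(lam)||
        return np.linalg.norm(c / (w + lam))

    # Interior KKT point: A positive definite, unconstrained step fits.
    if lam_min > 0 and x_norm(0.0) <= delta:
        return V @ (-c / w)

    # Boundary solution: ||x(lam)|| is decreasing for lam > -lam_min,
    # so bisect phi(lam) = ||x(lam)|| - delta.
    lo = max(0.0, -lam_min) + 1e-12
    hi = lo + 1.0
    while x_norm(hi) > delta:          # expand until phi(hi) <= 0
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if x_norm(mid) > delta:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return V @ (-c / (w + lam))
```

The indefinite case (`lam_min < 0`) is where local nonglobal minimizers can arise on the boundary; the sketch above returns only the global one.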
We establish lower bounds on the complexity of finding $\epsilon$-stationary points of smooth, non-convex high-dimensional functions using first-order methods. We prove that deterministic first-order methods, even applied to arbitrarily smooth functions
The minimum value function arising in the Tikhonov regularization technique is very useful for determining the regularization parameter, both theoretically and numerically. In this paper, we discuss the properties of the minimum value function. We also
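As a concrete reading of the object in this abstract, take the minimum value function $m(\alpha) = \min_x \|Ax-b\|^2 + \alpha\|x\|^2$ of Tikhonov-regularized least squares. A minimal sketch (an illustration, not the paper's method) evaluates $m(\alpha)$ for many $\alpha$ from one SVD of $A$, using the closed form $m(\alpha) = \sum_i \alpha\,\beta_i^2/(\sigma_i^2+\alpha) + \sum_{i > r} \beta_i^2$ with $\beta = U^\top b$:

```python
import numpy as np

def tikhonov_min_value(A, b, alpha):
    """m(alpha) = min_x ||Ax - b||^2 + alpha * ||x||^2, evaluated in
    closed form from the SVD A = U diag(s) V'.  One factorization
    serves all alpha, which is convenient when tuning the parameter."""
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    beta = U.T @ b                     # b in the left singular basis
    r = len(s)
    # components of b in range(A) contribute alpha*beta_i^2/(s_i^2+alpha);
    # components outside range(A) are irreducible residual.
    return float(np.sum(alpha * beta[:r] ** 2 / (s ** 2 + alpha))
                 + np.sum(beta[r:] ** 2))
```

The closed form makes the qualitative behavior visible: $m$ is increasing and concave in $\alpha$, which is the kind of structure that parameter-selection rules exploit.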
The generalized trust-region subproblem (GT) is a nonconvex quadratic optimization problem with a single quadratic constraint. It reduces to the classical trust-region subproblem (T) when the constraint set is a Euclidean ball. (GT) is polynomially solvable based
We propose a primal-dual interior-point method (IPM) that converges to second-order stationary points (SOSPs) of nonlinear semidefinite optimization problems (NSDPs). As far as we know, current algorithms for NSDPs only ensure con
For nonconvex optimization in machine learning, this article proves that every local minimum achieves the globally optimal value of the perturbable gradient basis model at any differentiable point. As a result, nonconvex machine learning is theoretic