
Generic minimizing behavior in semi-algebraic optimization

Posted by Dmitriy Drusvyatskiy
Publication date: 2015
Language: English





We present a theorem of Sard type for semi-algebraic set-valued mappings whose graphs have dimension no larger than that of their range space: the inverse of such a mapping admits a single-valued analytic localization around any pair in the graph, for a generic value of the parameter. This simple result yields a transparent and unified treatment of generic properties of semi-algebraic optimization problems: typical semi-algebraic problems have finitely many critical points, around each of which they admit a unique active manifold (analogue of an active set in nonlinear optimization); moreover, such critical points satisfy strict complementarity, and second-order sufficient conditions for optimality are indeed necessary.
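To fix ideas, here is a hedged formalization of the statement in standard variational-analysis notation (ours, not necessarily the paper's): consider a semi-algebraic set-valued mapping $F \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ with $\dim(\mathrm{gph}\, F) \leq m$. The claim is that for generic $\bar{v} \in \mathbb{R}^m$ and any $\bar{x} \in F^{-1}(\bar{v})$, there exist neighborhoods $V$ of $\bar{v}$ and $U$ of $\bar{x}$ and an analytic single-valued map $g \colon V \to U$ satisfying $F^{-1}(v) \cap U = \{g(v)\}$ for all $v \in V$; here "generic" means outside a semi-algebraic set of dimension strictly less than $m$.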




Read also

N.H. Chieu, J.W. Feng, W. Gao (2017)
In this paper, we introduce a new class of nonsmooth convex functions called SOS-convex semialgebraic functions, extending the recently proposed notion of SOS-convex polynomials. This class of nonsmooth convex functions covers many common nonsmooth functions arising in applications, such as the Euclidean norm, the maximum eigenvalue function, and the least squares functions with $\ell_1$-regularization or elastic net regularization used in statistics and compressed sensing. We show that, under commonly used strict feasibility conditions, the optimal value and an optimal solution of SOS-convex semi-algebraic programs can be found by solving a single semidefinite programming problem (SDP). We achieve the results by using tools from semi-algebraic geometry, the convex-concave minimax theorem, and a recently established Jensen-inequality-type result for SOS-convex polynomials. As an application, we outline how the derived results can be applied to show that robust SOS-convex optimization problems under restricted spectrahedron data uncertainty enjoy exact SDP relaxations. This extends the existing exact SDP relaxation result for restricted ellipsoidal data uncertainty and answers the open questions left in [Optimization Letters 9, 1-18 (2015)] on how to recover a robust solution from the semidefinite programming relaxation in this broader setting.
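As a concrete instance of the class (our illustrative rendering of the elastic net example mentioned above): the objective $f(x) = \|Ax - b\|_2^2 + \lambda_1 \|x\|_1 + \lambda_2 \|x\|_2^2$, with $\lambda_1, \lambda_2 \geq 0$, is convex and semialgebraic, since $\|x\|_1 = \sum_i \max\{x_i, -x_i\}$ is a maximum of finitely many polynomials, yet nonsmooth wherever some coordinate $x_i$ vanishes; by the result above, the associated programs can be solved through a single SDP under strict feasibility.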
Tingran Gao, Lek-Heng Lim, Ke Ye (2018)
We introduce in this paper a manifold optimization framework that utilizes semi-Riemannian structures on the underlying smooth manifolds. Unlike in Riemannian geometry, where each tangent space is equipped with a positive definite inner product, a semi-Riemannian manifold allows the metric tensor to be indefinite on each tangent space, i.e., possessing both positive and negative definite subspaces; differential geometric objects such as geodesics and parallel transport can be defined on non-degenerate semi-Riemannian manifolds as well, and can be carefully leveraged to adapt Riemannian optimization algorithms to the semi-Riemannian setting. In particular, we discuss the metric independence of manifold optimization algorithms, and illustrate that the weaker but more general semi-Riemannian geometry often suffices for the purpose of optimizing smooth functions on smooth manifolds in practice.
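A minimal example of the indefinite metrics involved (a standard construction, not specific to this paper): the Minkowski inner product on $\mathbb{R}^n$, $\langle u, v \rangle_J = u^\top J v$ with $J = \mathrm{diag}(-1, 1, \ldots, 1)$, is nondegenerate yet splits each tangent space into a one-dimensional negative definite subspace and an $(n-1)$-dimensional positive definite one; this is exactly the kind of metric tensor the semi-Riemannian framework admits where Riemannian geometry would require positive definiteness.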
We provide a numerical scheme to approximate as closely as desired the Gaussian or exponential measure $\mu(\Omega)$ of (not necessarily compact) basic semi-algebraic sets $\Omega \subset \mathbb{R}^n$. We obtain two monotone (nonincreasing and nondecreasing) sequences of upper and lower bounds $(\overline{\omega}_d)$, $(\underline{\omega}_d)$, $d \in \mathbb{N}$, each converging to $\mu(\Omega)$ as $d \to \infty$. For each $d$, computing $\overline{\omega}_d$ or $\underline{\omega}_d$ reduces to solving a semidefinite program whose size increases with $d$. Some preliminary (small dimension) computational experiments are encouraging and illustrate the potential of the method. The method also works for any measure whose moments are known and which satisfies Carleman's condition.
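One plausible reading of the upper-bound construction, sketched in the spirit of the moment-SOS hierarchy (details may differ from the paper): $\overline{\omega}_d = \min \{ \int p \, d\mu : p \in \mathbb{R}[x]_{2d},\ p \geq 1 \text{ on } \Omega,\ p \geq 0 \text{ on } \mathbb{R}^n \}$. Since any feasible $p$ dominates the indicator function of $\Omega$, every $\overline{\omega}_d$ upper-bounds $\mu(\Omega)$, and enforcing the two nonnegativity constraints through sum-of-squares certificates turns each level of the hierarchy into a semidefinite program whose size grows with the degree bound $d$.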
We consider linear optimization over a fixed compact convex feasible region that is semi-algebraic (or, more generally, tame). Generically, we prove that the optimal solution is unique and lies on a unique manifold, around which the feasible region is partly smooth, ensuring finite identification of the manifold by many optimization algorithms. Furthermore, second-order optimality conditions hold, guaranteeing smooth behavior of the optimal solution under small perturbations to the objective.
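A toy illustration (ours): minimize $\langle c, x \rangle$ over the unit disk $\{x \in \mathbb{R}^2 : \|x\|_2 \leq 1\}$. For every nonzero cost vector $c$, hence generically, the solution $x^\star(c) = -c/\|c\|_2$ is unique, lies on the unit circle (the identified manifold, along which the feasible region is partly smooth), and varies analytically with $c$, matching the generic behavior described above.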
In this paper, we introduce various mechanisms to obtain accelerated first-order stochastic optimization algorithms when the objective function is convex or strongly convex. Specifically, we extend the Catalyst approach originally designed for deterministic objectives to the stochastic setting. Given an optimization method with mild convergence guarantees for strongly convex problems, the challenge is to accelerate convergence to a noise-dominated region, and then achieve convergence with an optimal worst-case complexity depending on the noise variance of the gradients. A side contribution of our work is also a generic analysis that can handle inexact proximal operators, providing new insights about the robustness of stochastic algorithms when the proximal operator cannot be exactly computed.
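For orientation, the deterministic Catalyst outer loop that this work extends (as introduced by Lin, Mairal, and Harchaoui) alternates an inexact proximal-point step with extrapolation: $x_k \approx \arg\min_x \{ f(x) + \tfrac{\kappa}{2} \|x - y_{k-1}\|^2 \}$ followed by $y_k = x_k + \beta_k (x_k - x_{k-1})$, where $\kappa > 0$ adds strong convexity to each subproblem and $\beta_k$ is a momentum coefficient; the stochastic setting then has to balance subproblem accuracy against gradient noise, as described above.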