
Generic identifiability and second-order sufficiency in tame convex optimization

Published by: Adrian Lewis
Publication date: 2009
Paper language: English





We consider linear optimization over a fixed compact convex feasible region that is semi-algebraic (or, more generally, tame). Generically, we prove that the optimal solution is unique and lies on a unique manifold, around which the feasible region is partly smooth, ensuring finite identification of the manifold by many optimization algorithms. Furthermore, second-order optimality conditions hold, guaranteeing smooth behavior of the optimal solution under small perturbations to the objective.
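A minimal sketch of the phenomenon the abstract describes, using the closed unit disk as a simple compact, semi-algebraic convex feasible region (this toy example is illustrative and not from the paper): for every nonzero linear objective $c$, the maximizer of $\langle c, x\rangle$ over the disk is the unique point $c/\|c\|$, and it varies smoothly under small perturbations of $c$.

```python
import numpy as np

def maximize_over_disk(c):
    """Unique maximizer of <c, x> over the unit disk {x : ||x|| <= 1}.

    For generic (here: any nonzero) c the optimal solution is the single
    boundary point c/||c||, which depends smoothly on c -- a toy instance
    of the generic uniqueness and stability discussed above.
    """
    c = np.asarray(c, dtype=float)
    norm = np.linalg.norm(c)
    if norm == 0.0:
        raise ValueError("objective must be nonzero for a unique maximizer")
    return c / norm

x1 = maximize_over_disk([3.0, 4.0])          # the unique optimum [0.6, 0.8]
x2 = maximize_over_disk([3.0 + 1e-6, 4.0])   # tiny objective perturbation
# x2 stays close to x1: the optimal solution moves smoothly with c.
```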




Read also

In this paper we study second-order optimality conditions for non-convex set-constrained optimization problems. For a convex set-constrained optimization problem, it is well-known that second-order optimality conditions involve the support function of the second-order tangent set. In this paper we propose two approaches for establishing second-order optimality conditions for the non-convex case. In the first approach we extend the concept of the support function so that it is applicable to general non-convex set-constrained problems, whereas in the second approach we introduce the notion of the directional regular tangent cone and apply classical results of convex duality theory. Besides the second-order optimality conditions, the novelty of our approach lies in the systematic introduction and use, respectively, of direction
This paper provides an $H_2$ optimal scheme for reducing diffusively coupled second-order systems evolving over undirected networks. The aim is to find a reduced-order model that not only approximates the input-output mapping of the original system but also preserves crucial structures, such as the second-order form, asymptotic stability, and diffusive couplings. To this end, an $H_2$ optimal approach based on a convex relaxation is implemented to reduce the dimension, yielding a lower order asymptotically stable approximation of the original second-order network system. Then, a novel graph reconstruction approach is employed to convert the obtained model to a reduced system that is interpretable as an undirected diffusively coupled network. Finally, the effectiveness of the proposed method is illustrated via a large-scale networked mass-spring-damper system.
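To make the preserved structure concrete, here is a small sketch (matrix values are illustrative assumptions, not from the paper) of a two-mass second-order system $M\ddot q + D\dot q + Kq = 0$ with Laplacian-like coupling, whose asymptotic stability can be checked through the eigenvalues of its first-order companion matrix:

```python
import numpy as np

# Illustrative second-order form M q'' + D q' + K q = 0.
M = np.eye(2)                                # unit masses
D = np.eye(2)                                # damping
K = np.array([[2.0, -1.0], [-1.0, 2.0]])     # grounded springs + diffusive coupling

# First-order companion matrix: [q; q'] evolves as d/dt [q; q'] = A [q; q'].
n = M.shape[0]
A = np.block([
    [np.zeros((n, n)), np.eye(n)],
    [-np.linalg.solve(M, K), -np.linalg.solve(M, D)],
])

# Asymptotic stability: all eigenvalues in the open left half-plane.
stable = np.max(np.linalg.eigvals(A).real) < 0
```

A structure-preserving reduction, as the abstract describes, must keep this property (along with the second-order form and the diffusive couplings) in the reduced model.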
We describe an active-set method for the minimization of an objective function $\phi$ that is the sum of a smooth convex function and an $\ell_1$-regularization term. A distinctive feature of the method is the way in which active-set identification and second-order subspace minimization steps are integrated to combine the predictive power of the two approaches. At every iteration, the algorithm selects a candidate set of free and fixed variables, performs an (inexact) subspace phase, and then assesses the quality of the new active set. If it is not judged to be acceptable, then the set of free variables is restricted and a new active-set prediction is made. We establish global convergence for our approach, and compare the new method against the state-of-the-art code LIBLINEAR.
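The mechanism behind active-set identification in $\ell_1$-regularized problems can be sketched with plain proximal gradient (ISTA). This is not the paper's method, only a minimal illustration of how the soft-thresholding step drives coordinates exactly to zero, so the zero/nonzero pattern (the active set) is identified after finitely many iterations:

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1: shrinks toward zero and zeroes out small entries."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x

x = ista(np.eye(3), np.array([3.0, 0.1, -2.0]), lam=1.0)
# Coordinates whose data is dominated by lam are set exactly to zero,
# fixing the sparsity pattern that a second-order subspace phase could exploit.
```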
Josh Taylor (2021)
We maximize the production of biogas in a gradostat at steady state. The physical decision variables are the water, substrate, and biomass entering each tank and the flows through the interconnecting pipes. Our main technical focus is the nonconvex constraint describing microbial growth. We formulate a relaxation and prove that it is exact when the gradostat is outflow connected, its system matrix is irreducible, and the growth rate satisfies a simple condition. The relaxation has second-order cone representations for the Monod and Contois growth rates. We extend the steady state models to the case of multiple time periods by replacing the derivatives with numerical approximations instead of setting them to zero. The resulting optimizations are second-order cone programs, which can be solved at large scales using standard industrial software.
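A small sketch of the two ingredients the abstract mentions (parameter names and values are illustrative assumptions, not from the paper): the Monod growth rate, and a time step that replaces the derivative $\dot s$ with a finite difference rather than setting it to zero, as in the multi-period extension:

```python
def monod(s, mu_max=1.0, K=0.5):
    """Monod growth rate mu(s) = mu_max * s / (K + s)."""
    return mu_max * s / (K + s)

def substrate_step(s, x, s_in, dilution, dt, yield_coeff=0.5):
    """One explicit-Euler update for substrate in a single well-mixed tank.

    Steady state corresponds to ds_dt = 0; here the derivative is instead
    approximated numerically so the model can span multiple time periods.
    """
    ds_dt = dilution * (s_in - s) - monod(s) * x / yield_coeff
    return s + dt * ds_dt
```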
We design an algorithm which finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\| \le \epsilon$) using $O(\epsilon^{-3})$ stochastic gradient and Hessian-vector products, matching guarantees that were previously available only under a stronger assumption of access to multiple queries with the same random seed. We prove a lower bound which establishes that this rate is optimal and, surprisingly, that it cannot be improved using stochastic $p$th order methods for any $p \ge 2$, even when the first $p$ derivatives of the objective are Lipschitz. Together, these results characterize the complexity of non-convex stochastic optimization with second-order methods and beyond. Expanding our scope to the oracle complexity of finding $(\epsilon,\gamma)$-approximate second-order stationary points, we establish nearly matching upper and lower bounds for stochastic second-order methods. Our lower bounds here are novel even in the noiseless case.
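The Hessian-vector product oracle counted in these complexity bounds can be realized without forming the Hessian, for instance via a finite difference of gradients (a generic sketch, not the paper's construction):

```python
import numpy as np

def hvp(grad, x, v, eps=1e-5):
    """Approximate the Hessian-vector product H(x) @ v from gradient calls only.

    Uses the central difference (grad(x + eps*v) - grad(x - eps*v)) / (2*eps),
    which costs two gradient evaluations per product.
    """
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

# Example: for the quadratic F(x) = 0.5 * x^T A x, grad(x) = A x and the
# Hessian is A, so hvp should recover A @ v.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
grad = lambda x: A @ x
v = np.array([1.0, -1.0])
approx = hvp(grad, np.zeros(2), v)   # close to A @ v
```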