
Terracini Convexity

 Added by James Saunderson
Publication date: 2020
Language: English





We present a generalization of the notion of neighborliness to non-polyhedral convex cones. Although a definition of neighborliness is available in the non-polyhedral case in the literature, it is fairly restrictive, as it requires all the low-dimensional faces to be polyhedral. Our approach is more flexible and includes, for example, the cone of positive semidefinite matrices as a special case (this cone is not neighborly in general). We term our generalization Terracini convexity due to its conceptual similarity with the conclusion of Terracini's lemma from algebraic geometry. Polyhedral cones are Terracini convex if and only if they are neighborly. More broadly, we derive many families of non-polyhedral Terracini convex cones based on neighborly cones, linear images of cones of positive semidefinite matrices, and derivative relaxations of Terracini convex hyperbolicity cones. As a demonstration of the utility of our framework in the non-polyhedral case, we give a characterization, based on Terracini convexity, of the tightness of semidefinite relaxations for certain inverse problems.
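For context, the classical statement that motivates the name can be recalled as follows (this is the standard form of Terracini's lemma over characteristic zero, not a formula taken from the paper; the paper's definition is an analogous condition on convex cones):

```latex
% Terracini's lemma (classical form): for an irreducible projective
% variety $X$ and generic points $p, q \in X$, the tangent space to the
% secant variety $\sigma_2(X)$ at a generic point $z$ of the line
% spanned by $p$ and $q$ is the span of the two tangent spaces:
T_z\,\sigma_2(X) \;=\; \langle\, T_p X,\; T_q X \,\rangle .
```

Terracini convexity transplants this "tangent spaces add up" conclusion from secant varieties to faces of convex cones.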

Related research

Amenability is a notion of facial exposedness for convex cones that is stronger than being facially dual complete (or nice) which is, in turn, stronger than merely being facially exposed. Hyperbolicity cones are a family of algebraically structured closed convex cones that contain all spectrahedra (linear sections of positive semidefinite cones) as special cases. It is known that all spectrahedra are amenable. We establish that all hyperbolicity cones are amenable. As part of the argument, we show that any face of a hyperbolicity cone is a hyperbolicity cone. As a corollary, we show that the intersection of two hyperbolicity cones, not necessarily sharing a common relative interior point, is a hyperbolicity cone.
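The prototypical hyperbolicity cone is the positive semidefinite cone itself: the determinant is hyperbolic with respect to the identity matrix, because t ↦ det(tI − X) has only real roots (the eigenvalues of X). A small numerical illustration of this fact (our own sketch, not code from the paper):

```python
import numpy as np

# A polynomial p is hyperbolic w.r.t. a direction e if t -> p(t*e - x)
# has only real roots for every real x. For p = det on symmetric
# matrices and e = I, those roots are the eigenvalues of x.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = A + A.T                       # random symmetric matrix

# Characteristic polynomial of X; its roots are the zeros of det(t*I - X).
coeffs = np.poly(X)               # np.poly on a matrix -> char. poly coeffs
roots = np.roots(coeffs)

# Hyperbolicity of det w.r.t. I: all roots are (numerically) real.
assert np.all(np.abs(roots.imag) < 1e-8)
```

The hyperbolicity cone of det in the direction I, i.e. the matrices whose roots are all nonnegative, is exactly the positive semidefinite cone, which is why spectrahedra arise as linear sections of hyperbolicity cones.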
In 1940, Luís Santaló proved a Helly-type theorem for line transversals to boxes in R^d. An analysis of his proof reveals a convexity structure for ascending lines in R^d that is isomorphic to the ordinary notion of convexity in a convex subset of R^{2d-2}. This isomorphism is through a Cremona transformation on the Grassmannian of lines in P^d, which enables a precise description of the convex hull and affine span of up to d ascending lines: the lines in such an affine span turn out to be the rulings of certain classical determinantal varieties. Finally, we relate Cremona convexity to a new convexity structure that we call frame convexity, which extends to arbitrary-dimensional flats.
We review various characterizations of uniform convexity and smoothness on norm balls in finite-dimensional spaces and connect results stemming from the geometry of Banach spaces with scaling inequalities used in analysing the convergence of optimization methods. In particular, we establish loca
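The kind of scaling inequality referred to in the preceding abstract can be illustrated with the most standard instance, the strong-convexity lower bound (our own illustrative check, not taken from that paper):

```python
import numpy as np

# For f(x) = 0.5 x^T Q x with Q >= mu*I (mu = smallest eigenvalue of Q),
# strong convexity gives, for all x, y:
#   f(y) >= f(x) + <grad f(x), y - x> + (mu/2) * ||y - x||^2
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
Q = M @ M.T + np.eye(4)              # positive definite
mu = np.linalg.eigvalsh(Q)[0]        # strong-convexity modulus

f = lambda z: 0.5 * z @ Q @ z
grad = lambda z: Q @ z

for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = f(y)
    rhs = f(x) + grad(x) @ (y - x) + 0.5 * mu * np.dot(y - x, y - x)
    assert lhs >= rhs - 1e-9         # inequality holds up to roundoff
```

Inequalities of this shape are exactly what drive linear convergence rates of gradient methods, which is the connection the abstract draws.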
This paper considers the analysis of continuous time gradient-based optimization algorithms through the lens of nonlinear contraction theory. It demonstrates that in the case of a time-invariant objective, most elementary results on gradient descent based on convexity can be replaced by much more general results based on contraction. In particular, gradient descent converges to a unique equilibrium if its dynamics are contracting in any metric, with convexity of the cost corresponding to the special case of contraction in the identity metric. More broadly, contraction analysis provides new insights for the case of geodesically-convex optimization, wherein non-convex problems in Euclidean space can be transformed to convex ones posed over a Riemannian manifold. In this case, natural gradient descent converges to a unique equilibrium if it is contracting in any metric, with geodesic convexity of the cost corresponding to contraction in the natural metric. New results using semi-contraction provide additional insights into the topology of the set of optimizers in the case when multiple optima exist. Furthermore, they show how semi-contraction may be combined with specific additional information to reach broad conclusions about a dynamical system. The contraction perspective also easily extends to time-varying optimization settings and allows one to recursively build large optimization structures out of simpler elements. Extensions to natural primal-dual optimization and game-theoretic contexts further illustrate the potential reach of these new perspectives.
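The contraction property described above can be seen numerically: for a strongly convex cost, any two gradient-descent trajectories approach each other at a geometric rate. This is a toy sketch in discrete time under our own choice of quadratic cost and step size, not the paper's continuous-time formalism:

```python
import numpy as np

# For f(x) = 0.5 x^T Q x with Q > 0, the gradient-descent map
# x -> x - h*Q*x is a contraction in the identity metric when h is
# small, so the distance between any two trajectories shrinks at
# every step, independent of the initial conditions.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
Q = M @ M.T + np.eye(3)           # positive definite Hessian
h = 0.1 / np.linalg.norm(Q, 2)    # step size well inside the stable range

x = rng.standard_normal(3)
y = rng.standard_normal(3)
dists = []
for _ in range(50):
    x = x - h * (Q @ x)
    y = y - h * (Q @ y)
    dists.append(np.linalg.norm(x - y))

# The inter-trajectory distance is strictly decreasing.
assert all(d2 < d1 for d1, d2 in zip(dists, dists[1:]))
```

Note that the argument never mentions the minimizer: contraction is a statement about pairs of trajectories, which is what lets it generalize beyond convexity.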
We introduce and investigate a new generalized convexity notion for functions called prox-convexity. The proximity operator of such a function is single-valued and firmly nonexpansive. We provide examples of (strongly) quasiconvex, weakly convex, and DC (difference of convex) functions that are prox-convex; however, none of these classes fully contains the class of prox-convex functions, nor is any of them contained in it. We show that the classical proximal point algorithm remains convergent when the convexity of the proper lower semicontinuous function to be minimized is relaxed to prox-convexity.
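The proximal point algorithm mentioned above simply iterates the proximity operator. A minimal sketch for the classical convex case f(x) = |x|, whose prox is soft-thresholding (prox-convexity is the paper's condition that keeps this operator single-valued and firmly nonexpansive beyond convexity; this example is our own, not from the paper):

```python
import numpy as np

def prox_abs(v, lam):
    """prox_{lam*|.|}(v): soft-thresholding, the proximity operator of f(x) = |x|."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Proximal point algorithm: x_{k+1} = prox_{lam*f}(x_k).
x = 5.0
for _ in range(100):
    x = prox_abs(x, lam=0.2)

# Iterates converge to the minimizer of |x|, namely 0.
assert abs(x) < 1e-8
```

Each iteration moves x toward the minimizer by at most lam, so convergence here is finite; firm nonexpansiveness of the prox is what guarantees convergence in general.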