Optimizing Star-Convex Functions


Abstract

We introduce a polynomial time algorithm for optimizing the class of star-convex functions, under no restrictions except boundedness on a region about the origin, and Lebesgue measurability. The algorithm's performance is polynomial in the requested number of digits of accuracy, contrasting with the previous best known algorithm of Nesterov and Polyak, which has exponential dependence, and which further requires Lipschitz second differentiability of the function, but has milder dependence on the dimension of the domain. Star-convex functions constitute a rich class of functions generalizing convex functions to new parameter regimes, and which confound standard variants of gradient descent; more generally, we construct a family of star-convex functions where gradient-based algorithms provably give no information about the location of the global optimum. We introduce a new randomized algorithm for finding cutting planes based only on function evaluations, where, counterintuitively, the algorithm must look outside the feasible region to discover the structure of the star-convex function that lets it compute the next cut of the feasible region. We emphasize that the class of star-convex functions we consider is as unrestricted as possible: the class of Lebesgue measurable star-convex functions has theoretical appeal, introducing to the domain of polynomial-time algorithms a huge class with many interesting pathologies. We view our results as a step forward in understanding the scope of optimization techniques beyond the garden of convex optimization and local gradient-based methods.
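To make the star-convexity condition concrete, the following is a minimal sketch (not taken from the paper) using the illustrative function f(x) = |x1 * x2|, which is an assumed example of a function that is star-convex about its global minimizer at the origin yet fails ordinary convexity; the check numerically verifies the defining inequality f(t x* + (1 - t) x) <= t f(x*) + (1 - t) f(x).

```python
import numpy as np

# Hypothetical illustration (not from the paper): f(x) = |x1 * x2| is
# star-convex about its global minimizer at the origin, but is not convex.
def f(x):
    return abs(x[0] * x[1])

rng = np.random.default_rng(0)
x_star = np.zeros(2)  # global minimizer, with f(x_star) = 0

# Star-convexity about x_star: for all x and t in [0, 1],
#   f(t * x_star + (1 - t) * x) <= t * f(x_star) + (1 - t) * f(x)
for _ in range(10_000):
    x = rng.uniform(-5, 5, size=2)
    t = rng.uniform(0, 1)
    lhs = f(t * x_star + (1 - t) * x)
    rhs = t * f(x_star) + (1 - t) * f(x)
    assert lhs <= rhs + 1e-12

# Ordinary convexity fails: the midpoint of (1, 0) and (0, 1) has a strictly
# larger value (0.25) than the average of the endpoint values (0).
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert f((a + b) / 2) > (f(a) + f(b)) / 2
print("star-convex about the origin, but not convex")
```

The example also hints at why gradient-based methods can struggle on this class: along the coordinate axes the function is identically zero, so local gradient information near those rays says nothing about the direction of the global minimizer.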
