Adapting to Function Difficulty and Growth Conditions in Private Optimization


Abstract

We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize. While previous work provides worst-case bounds for arbitrary convex functions, it is often the case that the function at hand belongs to a smaller class that enjoys faster rates. Concretely, we show that for functions exhibiting $\kappa$-growth around the optimum, i.e., $f(x) \ge f(x^*) + \lambda \kappa^{-1} \|x - x^*\|_2^\kappa$ for $\kappa > 1$, our algorithms improve upon the standard $\sqrt{d}/(n\varepsilon)$ privacy rate to the faster $(\sqrt{d}/(n\varepsilon))^{\tfrac{\kappa}{\kappa-1}}$. Crucially, they achieve these rates without knowledge of the growth constant $\kappa$ of the function. Our algorithms build upon the inverse sensitivity mechanism, which adapts to instance difficulty (Asi & Duchi, 2020), and recent localization techniques in private optimization (Feldman et al., 2020). We complement our algorithms with matching lower bounds for these function classes and demonstrate that our adaptive algorithm is \emph{simultaneously} (minimax) optimal over all $\kappa \ge 1 + c$ whenever $c = \Theta(1)$.
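To make the claimed rate improvement concrete, the following minimal sketch (an illustration, not the paper's algorithm) numerically compares the worst-case privacy rate $\sqrt{d}/(n\varepsilon)$ against the improved rate $(\sqrt{d}/(n\varepsilon))^{\kappa/(\kappa-1)}$ under an assumed growth exponent; constants and the statistical $n^{-1/2}$ term are deliberately omitted.

```python
import math

def private_rate(d, n, eps, kappa=None):
    """Illustrative privacy-rate scaling (constants omitted).

    Without growth assumptions the rate scales as sqrt(d)/(n*eps);
    under kappa-growth (kappa > 1) the exponent improves to
    kappa/(kappa - 1), which is > 1, so the rate is strictly faster
    whenever sqrt(d)/(n*eps) < 1.
    """
    base = math.sqrt(d) / (n * eps)
    if kappa is None:
        return base
    return base ** (kappa / (kappa - 1))

# Example values (hypothetical): d = 100 dimensions, n = 10_000 samples,
# privacy parameter eps = 1. Quadratic growth (kappa = 2) squares the rate.
r_general = private_rate(d=100, n=10_000, eps=1.0)
r_quadratic = private_rate(d=100, n=10_000, eps=1.0, kappa=2)
assert r_quadratic < r_general  # stronger growth => faster rate
```

For instance, with these values the base rate is $10^{-3}$, while $\kappa = 2$ yields $10^{-6}$; as $\kappa \to \infty$ the exponent $\kappa/(\kappa-1) \to 1$ and the improvement vanishes, matching the general convex case.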
