We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize. While previous work provides worst-case bounds for arbitrary convex functions, it is often the case that the function at hand belongs to a smaller class that enjoys faster rates. Concretely, we show that for functions exhibiting $\kappa$-growth around the optimum, i.e., $f(x) \ge f(x^*) + \lambda \kappa^{-1} \|x - x^*\|_2^\kappa$ for $\kappa > 1$, our algorithms improve upon the standard $\sqrt{d}/(n\varepsilon)$ privacy rate to the faster $(\sqrt{d}/(n\varepsilon))^{\tfrac{\kappa}{\kappa - 1}}$. Crucially, they achieve these rates without knowledge of the growth constant $\kappa$ of the function. Our algorithms build upon the inverse sensitivity mechanism, which adapts to instance difficulty (Asi & Duchi, 2020), and recent localization techniques in private optimization (Feldman et al., 2020). We complement our algorithms with matching lower bounds for these function classes and demonstrate that our adaptive algorithm is \emph{simultaneously} (minimax) optimal over all $\kappa \ge 1+c$ whenever $c = \Theta(1)$.
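As a quick sanity check on the stated bound (a worked specialization of the abstract's rate, not a claim taken from the paper itself): setting $\kappa = 2$, i.e., quadratic growth as under strong convexity, gives
\[
\left(\frac{\sqrt{d}}{n\varepsilon}\right)^{\frac{\kappa}{\kappa-1}}\Bigg|_{\kappa=2} \;=\; \frac{d}{(n\varepsilon)^2},
\]
the familiar privacy rate for strongly convex objectives, while letting $\kappa \to \infty$ sends the exponent $\tfrac{\kappa}{\kappa-1} \to 1$ and recovers the worst-case convex rate $\sqrt{d}/(n\varepsilon)$.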
This paper presents the private-outsourced-Gaussian-process-upper-confidence-bound (PO-GP-UCB) algorithm, the first algorithm for privacy-preserving Bayesian optimization (BO) in the outsourced setting with a provable performance guarantee.
In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy. Prior work in this mode
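To make the shuffle-model pipeline concrete, here is a minimal Python sketch (our illustration, not any paper's protocol; randomized response stands in for the local randomizer, and the names local_randomizer and shuffler are hypothetical):

    import random

    def local_randomizer(bit, p=0.25):
        # Randomized response (one possible local randomizer):
        # flip the user's bit with probability p before reporting.
        return bit ^ 1 if random.random() < p else bit

    def shuffler(messages):
        # Trusted shuffler: uniformly permute all collected messages,
        # severing the link between users and their reports.
        shuffled = list(messages)
        random.shuffle(shuffled)
        return shuffled

    # Each user sends randomized message(s); the analyzer sees only the
    # shuffled multiset, which is what must satisfy differential privacy.
    user_bits = [1, 0, 1, 1, 0]
    reports = [local_randomizer(b) for b in user_bits]
    analyzer_view = shuffler(reports)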
Through the lens of information-theoretic reductions, we examine an approach to fair optimization and learning in which a black-box optimizer is used to learn a fair model for classification or regression. Quantifying the complexity, both stat
Finding efficient, easily implementable differentially private (DP) algorithms that offer strong excess risk bounds is an important problem in modern machine learning. To date, most work has focused on private empirical risk minimization (ERM) or pri
We study differentially private (DP) algorithms for stochastic convex optimization (SCO). In this problem the goal is to approximately minimize the population loss given i.i.d. samples from a distribution over convex and Lipschitz loss functions. A l
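For concreteness, the problem just described admits the standard formalization (notation ours, supplied for reference):
\[
\min_{x \in \mathcal{X}} \; F(x) := \mathbb{E}_{s \sim P}\bigl[f(x; s)\bigr],
\]
given i.i.d. samples $s_1, \dots, s_n \sim P$, where each $f(\cdot\,; s)$ is convex and Lipschitz; the output $\hat{x}$ must be differentially private with respect to the sample, and quality is measured by the excess population loss $F(\hat{x}) - \min_{x \in \mathcal{X}} F(x)$.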