Through the lens of information-theoretic reductions, we examine a reductions approach to fair optimization and learning where a black-box optimizer is used to learn a fair model for classification or regression. Our end goal is to quantify the statistical and computational complexity of making such models satisfy the rigorous definition of differential privacy. We resolve a few open questions and show applicability to fair machine learning, hypothesis testing, and the optimization of non-standard measures of classification loss. Furthermore, our sample complexity bounds are tight among all strategies that jointly minimize a composition of functions. The reductions approach to fair optimization can be abstracted as the constrained group-objective optimization problem, in which we aim to optimize an objective that is a function of the losses of individual groups, subject to some constraints. We give the first polynomial-time algorithms to solve the problem with $(\epsilon, 0)$ or $(\epsilon, \delta)$ differential privacy guarantees when it is defined on a convex decision set (for example, the $\ell_p$ unit ball) with convex constraints and losses. Accompanying information-theoretic lower bounds for the problem are presented. In addition, compared to a previous method for ensuring differential privacy subject to a relaxed form of the equalized odds fairness constraint, the $(\epsilon, \delta)$-differentially private algorithm we present provides asymptotically better sample complexity guarantees, resulting in an exponential improvement in certain parameter regimes. We introduce a class of bounded divergence linear optimizers, which could be of independent interest, and specialize them to pure and approximate differential privacy.
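To make the constrained group-objective setup concrete, the sketch below (an illustrative construction, not the paper's algorithm) runs a noisy projected subgradient method for minimizing the maximum of per-group average losses over the $\ell_2$ unit ball, adding Gaussian noise in the style of the $(\epsilon, \delta)$ Gaussian mechanism. The function name, the squared-loss choice, the step size, and the noise calibration are placeholder assumptions for illustration only.

    import numpy as np

    def dp_max_group_loss(group_X, group_y, T=200, eps=1.0, delta=1e-5,
                          lipschitz=1.0, step=0.05, rng=np.random.default_rng(0)):
        """Illustrative noisy projected subgradient method for
        min_{||w||_2 <= 1} max_k (average squared loss on group k).
        The per-iteration Gaussian noise scale below is a crude placeholder
        calibration, not a proof of (eps, delta)-DP for this exact routine."""
        d = group_X[0].shape[1]
        w = np.zeros(d)
        sigma = lipschitz * np.sqrt(2 * T * np.log(1.25 / delta)) / eps
        for _ in range(T):
            losses, grads = [], []
            for X, y in zip(group_X, group_y):
                resid = X @ w - y
                losses.append(np.mean(resid ** 2))            # group's average loss
                grads.append(2 * X.T @ resid / len(y))        # its gradient
            g = grads[int(np.argmax(losses))]                 # subgradient of the max
            w -= step * (g + rng.normal(0.0, sigma, size=d))  # noisy update
            norm = np.linalg.norm(w)
            if norm > 1.0:                                    # project onto the l2 ball
                w /= norm
        return w

Taking the subgradient from the currently worst-off group is what ties the update to a group-objective (here, the maximum of the group losses) rather than to a single pooled loss.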
This paper presents the private-outsourced-Gaussian process-upper confidence bound (PO-GP-UCB) algorithm, which is the first algorithm for privacy-preserving Bayesian optimization (BO) in the outsourced setting with a provable performance guarantee.
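As the name suggests, PO-GP-UCB builds on the GP-UCB acquisition rule; for reference, the standard (non-private, non-outsourced) rule selects the next query point as
$$ x_t = \arg\max_{x \in \mathcal{X}} \; \mu_{t-1}(x) + \sqrt{\beta_t}\,\sigma_{t-1}(x), $$
where $\mu_{t-1}$ and $\sigma_{t-1}$ are the Gaussian process posterior mean and standard deviation and $\beta_t$ is an exploration parameter. The privacy-preserving, outsourced modifications are specific to the cited paper and are not reproduced here.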
Finding efficient, easily implementable differentially private (DP) algorithms that offer strong excess risk bounds is an important problem in modern machine learning. To date, most work has focused on private empirical risk minimization (ERM) or private stochastic convex optimization (SCO) …
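As a reference point for "easily implementable" DP learning, the sketch below shows output perturbation for ridge-regularized least-squares ERM: solve the non-private problem, then add Gaussian noise scaled to a sensitivity bound. The $2L/(n\lambda)$ sensitivity quoted in the comment is the standard bound for $\lambda$-strongly-convex regularized ERM with $L$-Lipschitz losses; it is stated informally here and is not a result of this abstract.

    import numpy as np

    def output_perturbation_ridge(X, y, lam=1.0, eps=1.0, delta=1e-5,
                                  lipschitz=1.0, rng=np.random.default_rng(0)):
        """Illustrative output perturbation for ridge-regularized ERM.
        For L-Lipschitz losses and lam-strongly-convex regularized ERM on n
        points, the l2 sensitivity of the exact minimizer is at most
        2 * L / (n * lam); Gaussian noise with scale
        sensitivity * sqrt(2 ln(1.25 / delta)) / eps then targets
        (eps, delta)-DP (standard calibration, stated informally)."""
        n, d = X.shape
        w_hat = np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)
        sensitivity = 2.0 * lipschitz / (n * lam)
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
        return w_hat + rng.normal(0.0, sigma, size=d)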
In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy. Prior work in this model …
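A minimal, illustrative instance of this pipeline is a shuffle protocol for summing bits: each user randomizes locally with randomized response, the shuffler only permutes the messages, and the analyzer debiases the shuffled sum. The flip probability p is a free parameter here; calibrating it to a target $(\epsilon, \delta)$ via amplification by shuffling is outside the scope of this sketch.

    import numpy as np

    def local_randomizer(bit, p, rng):
        """Each user flips their bit with probability p (randomized response)."""
        return bit if rng.random() > p else 1 - bit

    def shuffled_bit_sum(bits, p=0.25, rng=np.random.default_rng(0)):
        """Illustrative one-message-per-user shuffle protocol for a bit sum.
        Privacy comes from the local flips plus the shuffler discarding order;
        calibrating p to a target (eps, delta) is not shown here."""
        messages = [local_randomizer(b, p, rng) for b in bits]
        rng.shuffle(messages)                 # trusted shuffler: random permutation
        n = len(bits)
        noisy_sum = sum(messages)
        # debias: E[message] = p + (1 - 2p) * bit
        return (noisy_sum - p * n) / (1.0 - 2.0 * p)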
We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize. While previous work provides worst-case bounds for arbitrary convex functions, it is often the case that the function …
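The "hardness" being adapted to is typically captured by a growth condition; a standard form (stated for context, not as this abstract's exact assumption) asks that, for some $\lambda > 0$ and exponent $\kappa \ge 1$,
$$ f(x) - \min_{x'} f(x') \;\ge\; \lambda \, \|x - x^{\star}\|^{\kappa} \quad \text{for all feasible } x, $$
where $x^{\star}$ is a minimizer; smaller $\kappa$ (for example, sharp growth with $\kappa = 1$) permits faster rates than the worst-case convex bound.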
We study differentially private (DP) algorithms for stochastic convex optimization (SCO). In this problem, the goal is to approximately minimize the population loss given i.i.d. samples from a distribution over convex and Lipschitz loss functions. A long line of work …
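For reference, the optimal excess population loss for $(\epsilon, \delta)$-DP SCO over the $\ell_2$ ball with $1$-Lipschitz convex losses, established in prior work, is (up to logarithmic factors in $1/\delta$)
$$ \Theta\!\left( \frac{1}{\sqrt{n}} + \frac{\sqrt{d}}{n\,\epsilon} \right), $$
where $n$ is the number of samples and $d$ is the dimension; the first term is the non-private statistical rate and the second is the additional cost of privacy.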