We show that adding differential privacy to Explainable Boosting Machines (EBMs), a recent method for training interpretable ML models, yields state-of-the-art accuracy while protecting privacy. Our experiments on multiple classification and regression datasets show that DP-EBM models suffer surprisingly little accuracy loss even with strong differential privacy guarantees. In addition to high accuracy, two other benefits of applying DP to EBMs are: a) trained models provide exact global and local interpretability, which is often important in settings where differential privacy is needed; and b) the models can be edited after training without loss of privacy to correct errors which DP noise may have introduced.
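A minimal sketch of how such a model might be trained, assuming the InterpretML package exposes a DP-EBM estimator under interpret.privacy with epsilon/delta constructor arguments (the import path, class name, and parameter names are assumptions about that library, not taken from the abstract):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.privacy import DPExplainableBoostingClassifier  # assumed import path

# Any tabular classification dataset works; breast cancer is only a stand-in.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Total privacy budget for the entire training run
# (epsilon/delta are assumed parameter names; check the installed version).
dp_ebm = DPExplainableBoostingClassifier(epsilon=1.0, delta=1e-6)
dp_ebm.fit(X_train, y_train)
print("held-out accuracy:", dp_ebm.score(X_test, y_test))

# An EBM is an additive model of per-feature shape functions, so the global
# explanation is exact rather than post-hoc; per the abstract, those shape
# functions can also be inspected and edited after training.
global_explanation = dp_ebm.explain_global()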
Characterizing the privacy degradation over compositions, i.e., privacy accounting, is a fundamental topic in differential privacy (DP) with many applications to differentially private machine learning and federated learning. We propose a unification …
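For context only (these are the classical bounds that privacy accounting refines, not the unification proposed in that work): composing $k$ adaptively chosen mechanisms, each $(\epsilon, \delta)$-DP, yields $(k\epsilon, k\delta)$-DP by basic composition, while advanced composition gives, for any $\delta' > 0$, $$\Big(\sqrt{2k\ln(1/\delta')}\,\epsilon + k\epsilon(e^{\epsilon}-1),\; k\delta + \delta'\Big)\text{-DP}.$$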
Recent research in differential privacy has demonstrated that (sub)sampling can amplify the level of protection. For example, for $\epsilon$-differential privacy and simple random sampling with sampling rate $r$, the actual privacy guarantee is approximately $r\epsilon$ …
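For reference, a standard form of this amplification bound (a well-known result, stated here only as context for the snippet above): if a mechanism satisfies $\epsilon$-DP, then running it on a random subsample drawn with rate $r$ satisfies $\epsilon'$-DP with $$\epsilon' = \ln\!\big(1 + r\,(e^{\epsilon} - 1)\big) \approx r\epsilon \quad \text{for small } \epsilon.$$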
Sensitive statistics are often collected across sets of users, with repeated collection of reports done over time. For example, trends in users' private preferences or software usage may be monitored via such reports. We study the collection of such statistics …
Traditional approaches to differential privacy assume a fixed privacy requirement $\epsilon$ for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint. As differential privacy is increasingly deployed …
Bayesian neural networks (BNNs) allow for uncertainty quantification in prediction, offering an advantage over regular neural networks that has not been explored in the differential privacy (DP) framework. We fill this important gap by leveraging recent …