We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on $L_p$ perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the $L_{\infty}$ perturbations case is provably computationally harder than the case $2 \leq p < \infty$.
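For concreteness (this gloss is not part of the abstract, but it is the standard setup for such results): for a halfspace $h_w(x) = \mathrm{sign}(\langle w, x \rangle)$, the $L_p$-robust error at perturbation radius $\gamma$ is
$$\mathrm{err}^{p}_{\gamma}(h_w) = \Pr_{(x,y) \sim D}\Big[\exists\, \delta : \|\delta\|_p \le \gamma,\ \mathrm{sign}(\langle w, x + \delta \rangle) \neq y\Big],$$
and by norm duality the inner event is equivalent to $y \langle w, x \rangle \le \gamma \|w\|_q$ with $1/p + 1/q = 1$. This dual-norm view is one way to see why $L_\infty$ (dual norm $L_1$) behaves differently from the case $2 \le p < \infty$.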
We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model. In more detail, we study the complexity of properly learning $d$-dimensional halfspaces on the unit ball within misclassification error $\alpha \cdot \mathrm{OPT}_{\gamma}$ ...
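A hedged restatement of the quantities involved (the abstract is clipped mid-formula): for unit vectors $w$ and examples on the unit ball, the $\gamma$-margin error rate is typically defined as
$$\mathrm{err}_\gamma(w) = \Pr_{(x,y) \sim D}\big[\, y \langle w, x \rangle < \gamma \,\big], \qquad \mathrm{OPT}_\gamma = \min_{\|w\|_2 = 1} \mathrm{err}_\gamma(w),$$
so a guarantee of the form $\alpha \cdot \mathrm{OPT}_\gamma + \epsilon$ bounds the learner's misclassification error against the best $\gamma$-margin halfspace, with $\alpha \ge 1$ the approximation factor.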
We present a differentially private learner for halfspaces over a finite grid $G$ in $\mathbb{R}^d$ with sample complexity $\approx d^{2.5} \cdot 2^{\log^*|G|}$, which improves the state-of-the-art result of [Beimel et al., COLT 2019] by a $d^2$ factor. ...
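For scale (an illustrative note, not from the abstract): $\log^* n$ denotes the iterated logarithm, the number of times $\log_2$ must be applied to $n$ before the value drops to at most $1$:
$$\log^* 2 = 1, \quad \log^* 4 = 2, \quad \log^* 16 = 3, \quad \log^* 65536 = 4, \quad \log^* 2^{65536} = 5,$$
so the $2^{\log^*|G|}$ factor is effectively a small constant for any grid $G$ that can be written down in practice.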
Over recent years, devising classification algorithms that are robust to adversarial perturbations has emerged as a challenging problem. In particular, deep neural nets (DNNs) seem to be susceptible to small imperceptible changes over test instances.
In this work, we initiate a formal study of probably approximately correct (PAC) learning under evasion attacks, where the adversary's goal is to \emph{misclassify} the adversarially perturbed sample point $\widetilde{x}$, i.e., $h(\widetilde{x}) \neq c(\widetilde{x})$ ...
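In symbols (a hedged reconstruction of the clipped definition): with perturbation set $U(x)$ and target concept $c$, the adversarial risk of a hypothesis $h$ is
$$\mathrm{Risk}_U(h, c) = \Pr_{x \sim D}\big[\, \exists\, \widetilde{x} \in U(x) : h(\widetilde{x}) \neq c(\widetilde{x}) \,\big],$$
i.e., the adversary succeeds on $x$ whenever some allowed perturbation $\widetilde{x}$ is labeled by $h$ differently from its true label $c(\widetilde{x})$.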
Making learners robust to adversarial perturbation at test time (i.e., evasion attacks) or training time (i.e., poisoning attacks) has emerged as a challenging task. It is known that for some natural settings, sublinear perturbations in the training ...