
Lower Bounds for Adversarially Robust PAC Learning

Published by Mohammad Mahmoody
Publication date: 2019
Research field: Informatics Engineering
Language of the paper: English





In this work, we initiate a formal study of probably approximately correct (PAC) learning under evasion attacks, where the adversary's goal is to \emph{misclassify} the adversarially perturbed sample point $\widetilde{x}$, i.e., $h(\widetilde{x}) \neq c(\widetilde{x})$, where $c$ is the ground truth concept and $h$ is the learned hypothesis. Previous work on PAC learning of adversarial examples has modeled adversarial examples as corrupted inputs, in which the goal of the adversary is to achieve $h(\widetilde{x}) \neq c(x)$, where $x$ is the original untampered instance. These two definitions of adversarial risk coincide for many natural distributions, such as images, but are incomparable in general. We first prove that for many theoretically natural input spaces of high dimension $n$ (e.g., an isotropic Gaussian in dimension $n$ under $\ell_2$ perturbations), if the adversary is allowed to apply up to a sublinear $o(\|x\|)$ amount of perturbation to the test instances, PAC learning requires sample complexity that is exponential in $n$. This is in contrast with results proved using the corrupted-input framework, in which the sample complexity of robust learning is only polynomially larger. We then formalize hybrid attacks in which the evasion attack is preceded by a poisoning attack. This is perhaps reminiscent of trapdoor attacks, in which a poisoning phase is involved as well, but the evasion phase here uses the error-region definition of risk that aims at misclassifying the perturbed instances. In this case, we show that PAC learning is sometimes impossible altogether, even when it is possible without the attack (e.g., due to the bounded VC dimension).
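For concreteness, the two notions of adversarial risk contrasted above can be written side by side as follows (a sketch in the abstract's notation, where $\mathrm{Ball}_b(x)$ stands for whatever set of perturbations of budget $b$ the adversary is allowed and $D$ is the test distribution; the paper's exact perturbation sets may differ):

$\mathrm{Risk}^{\mathrm{ER}}_b(h,c) = \Pr_{x \sim D}\big[\,\exists\, \widetilde{x} \in \mathrm{Ball}_b(x) :\; h(\widetilde{x}) \neq c(\widetilde{x})\,\big]$ (error-region risk, this work)

$\mathrm{Risk}^{\mathrm{CI}}_b(h,c) = \Pr_{x \sim D}\big[\,\exists\, \widetilde{x} \in \mathrm{Ball}_b(x) :\; h(\widetilde{x}) \neq c(x)\,\big]$ (corrupted-instance risk, prior work)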


Read also

Over recent years, devising classification algorithms that are robust to adversarial perturbations has emerged as a challenging problem. In particular, deep neural nets (DNNs) seem to be susceptible to small imperceptible changes over test instances. However, the line of work on provable robustness has, so far, been focused on information-theoretic robustness, ruling out even the existence of any adversarial examples. In this work, we study whether there is hope to benefit from the algorithmic nature of an attacker that searches for adversarial examples, and ask whether there is any learning task for which it is possible to design classifiers that are only robust against polynomial-time adversaries. Indeed, numerous cryptographic tasks can only be secure against computationally bounded adversaries, and are indeed impossible for computationally unbounded attackers. Thus, it is natural to ask if the same strategy could help robust learning. We show that the computational limitation of attackers can indeed be useful in robust learning by demonstrating the possibility of a classifier for some learning task for which computational and information-theoretic adversaries of bounded perturbations have very different power. Namely, while computationally unbounded adversaries can attack successfully and find adversarial examples with small perturbation, polynomial-time adversaries are unable to do so unless they can break standard cryptographic hardness assumptions. Our results, therefore, indicate that perhaps a similar approach to cryptography (relying on computational hardness) holds promise for achieving computationally robust machine learning. In the reverse direction, we also show that the existence of such a learning task, in which computational robustness beats information-theoretic robustness, requires computational hardness, as it implies (average-case) hardness of NP.
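The following toy Python sketch (standard library only) illustrates the general flavor of how cryptographic hardness can separate bounded from unbounded adversaries; it is not the paper's learning task or construction, and the MAC-based classifier here is purely an assumption made for illustration.

import hmac, hashlib, os

KEY = os.urandom(32)            # secret key baked into the classifier
MSG_LEN, TAG_LEN = 16, 32       # an input is message bytes followed by tag bytes

def classify(x: bytes) -> int:
    """Label 1 iff the embedded tag is a valid HMAC of the embedded message."""
    msg, tag = x[:MSG_LEN], x[MSG_LEN:MSG_LEN + TAG_LEN]
    expected = hmac.new(KEY, msg, hashlib.sha256).digest()
    return int(hmac.compare_digest(tag, expected))

def positive_example() -> bytes:
    """Sample a correctly tagged input (label 1)."""
    msg = os.urandom(MSG_LEN)
    return msg + hmac.new(KEY, msg, hashlib.sha256).digest()

assert classify(positive_example()) == 1
x = os.urandom(MSG_LEN + TAG_LEN)   # a typical random input: label 0
assert classify(x) == 0
# Perturbing x so that it becomes labeled 1 amounts to forging a valid HMAC tag
# for some nearby message: infeasible for polynomial-time adversaries without
# KEY, but easy for an unbounded adversary that can brute-force the key.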
Making learners robust to adversarial perturbation at test time (i.e., evasion attacks) or training time (i.e., poisoning attacks) has emerged as a challenging task. It is known that for some natural settings, sublinear perturbations in the training phase or the testing phase can drastically decrease the quality of the predictions. These negative results, however, are information theoretic and only prove the existence of such successful adversarial perturbations. A natural question for these settings is whether or not we can make classifiers computationally robust to polynomial-time attacks. In this work, we prove strong barriers against achieving such envisioned computational robustness both for evasion and poisoning attacks. In particular, we show that if the test instances come from a product distribution (e.g., uniform over $\{0,1\}^n$ or $[0,1]^n$, or an isotropic $n$-variate Gaussian) and there is an initial constant error, then there exists a polynomial-time attack that finds adversarial examples of Hamming distance $O(\sqrt{n})$. For poisoning attacks, we prove that for any learning algorithm with sample complexity $m$ and any efficiently computable predicate defining some bad property $B$ of the produced hypothesis (e.g., failing on a particular test) that happens with an initial constant probability, there exist polynomial-time online poisoning attacks that tamper with $O(\sqrt{m})$ of the examples, replace them with other correctly labeled examples, and increase the probability of the bad event $B$ to $\approx 1$. Both our poisoning and evasion attacks are black-box in how they access their corresponding components of the system (i.e., the hypothesis, the concept, and the learning algorithm) and make no further assumptions about the classifier or the learning algorithm producing the classifier.
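As a rough illustration of the evasion setting described above, the Python sketch below uses a naive random bit-flip search as a stand-in for the paper's polynomial-time attack; the classifier, concept, and budget constant are made-up choices for the example, and black-box access means only queries to h and c are used.

import math, random

n = 400
budget = int(2 * math.sqrt(n))            # O(sqrt(n)) bit flips

def c(x):                                 # ground truth: majority
    return int(sum(x) >= n / 2)

def h(x):                                 # imperfect hypothesis: shifted threshold
    return int(sum(x) >= n / 2 + 3)

def naive_evasion(x, h, c, budget):
    """Apply up to `budget` single-bit flips, stopping once h(x') != c(x')."""
    x = list(x)
    for _ in range(budget):
        if h(x) != c(x):
            return x                      # adversarial example found
        x[random.randrange(len(x))] ^= 1  # try another random flip
    return x if h(x) != c(x) else None

random.seed(0)
x = [random.randint(0, 1) for _ in range(n)]
print("success" if naive_evasion(x, h, c, budget) is not None else "failure")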
We study the problem of high-dimensional linear regression in a robust model where an $\epsilon$-fraction of the samples can be adversarially corrupted. We focus on the fundamental setting where the covariates of the uncorrupted samples are drawn from a Gaussian distribution $\mathcal{N}(0, \Sigma)$ on $\mathbb{R}^d$. We give nearly tight upper bounds and computational lower bounds for this problem. Specifically, our main contributions are as follows: For the case that the covariance matrix is known to be the identity, we give a sample near-optimal and computationally efficient algorithm that outputs a candidate hypothesis vector $\widehat{\beta}$ which approximates the unknown regression vector $\beta$ within $\ell_2$-norm $O(\epsilon \log(1/\epsilon) \sigma)$, where $\sigma$ is the standard deviation of the random observation noise. An error of $\Omega(\epsilon \sigma)$ is information-theoretically necessary, even with infinite sample size. Prior work gave an algorithm for this problem with sample complexity $\tilde{\Omega}(d^2/\epsilon^2)$ whose error guarantee scales with the $\ell_2$-norm of $\beta$. For the case of unknown covariance, we show that we can efficiently achieve the same error guarantee as in the known-covariance case using an additional $\tilde{O}(d^2/\epsilon^2)$ unlabeled examples. On the other hand, an error of $O(\epsilon \sigma)$ can be information-theoretically attained with $O(d/\epsilon^2)$ samples. We prove a Statistical Query (SQ) lower bound providing evidence that this quadratic tradeoff in the sample size is inherent. More specifically, we show that any polynomial-time SQ learning algorithm for robust linear regression (in Huber's contamination model) with estimation complexity $O(d^{2-c})$, where $c>0$ is an arbitrarily small constant, must incur an error of $\Omega(\sqrt{\epsilon} \sigma)$.
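A small simulation can make the contamination model concrete. The sketch below is not the paper's estimator; the corruption strategy and parameters are arbitrary illustrative choices. It corrupts an $\epsilon$-fraction of the samples and shows how ordinary least squares degrades relative to fitting only the clean portion.

import numpy as np

rng = np.random.default_rng(0)
d, N, eps, sigma = 20, 5000, 0.1, 1.0

beta = rng.normal(size=d)                       # unknown regression vector
X = rng.normal(size=(N, d))                     # covariates ~ N(0, I_d)
y = X @ beta + sigma * rng.normal(size=N)       # clean responses

k = int(eps * N)                                # adversarially replace k samples
X[:k] = rng.normal(size=(k, d))
y[:k] = 100.0                                   # crude adversarial responses

ols_all = np.linalg.lstsq(X, y, rcond=None)[0]      # least squares on corrupted data
ols_clean = np.linalg.lstsq(X[k:], y[k:], rcond=None)[0]  # oracle: clean portion only

print("l2 error with corruptions :", np.linalg.norm(ols_all - beta))
print("l2 error on clean portion :", np.linalg.norm(ols_clean - beta))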
In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn $x=(x_1,\ldots,x_n) \in \{0,1\}^n$ from a stream of random linear equations over $\mathrm{F}_2$ that are correct with probability $\frac{1}{2}+\varepsilon$ and flipped with probability $\frac{1}{2}-\varepsilon$, that any learning algorithm requires either a memory of size $\Omega(n^2/\varepsilon)$ or an exponential number of samples. In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [GRT18], when the samples are noisy. A matrix $M: A \times X \rightarrow \{-1,1\}$ corresponds to the following learning problem with error parameter $\varepsilon$: an unknown element $x \in X$ is chosen uniformly at random. A learner tries to learn $x$ from a stream of samples, $(a_1, b_1), (a_2, b_2), \ldots$, where for every $i$, $a_i \in A$ is chosen uniformly at random and $b_i = M(a_i,x)$ with probability $1/2+\varepsilon$ and $b_i = -M(a_i,x)$ with probability $1/2-\varepsilon$ ($0<\varepsilon< \frac{1}{2}$). Assume that $k,\ell,r$ are such that any submatrix of $M$ with at least $2^{-k} \cdot |A|$ rows and at least $2^{-\ell} \cdot |X|$ columns has a bias of at most $2^{-r}$. We show that any learning algorithm for the learning problem corresponding to $M$, with error, requires either a memory of size at least $\Omega\left(\frac{k \cdot \ell}{\varepsilon}\right)$, or at least $2^{\Omega(r)}$ samples. In particular, this shows that for a large class of learning problems, the same as those in [GRT18], any learning algorithm requires either a memory of size at least $\Omega\left(\frac{(\log |X|) \cdot (\log |A|)}{\varepsilon}\right)$ or an exponential number of noisy samples. Our proof is based on adapting the arguments in [Raz17, GRT18] to the noisy case.
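The sample stream in the parity-learning problem above is easy to write down explicitly. The following minimal Python sketch only generates the noisy equations (the paper's contribution is the lower bound, not an algorithm); the parameters are illustrative.

import random

def noisy_parity_stream(x, eps, num_samples, rng=random):
    """Yield (a, b): a uniform in {0,1}^n, b = <a,x> mod 2, flipped w.p. 1/2 - eps."""
    n = len(x)
    for _ in range(num_samples):
        a = [rng.randint(0, 1) for _ in range(n)]
        b = sum(ai * xi for ai, xi in zip(a, x)) % 2
        if rng.random() < 0.5 - eps:
            b ^= 1                      # flip the label with probability 1/2 - eps
        yield a, b

random.seed(1)
n, eps = 16, 0.1
secret = [random.randint(0, 1) for _ in range(n)]   # the unknown parity x
print(next(noisy_parity_stream(secret, eps, 1)))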
Function inversion is the problem that, given a random function $f: [M] \to [N]$, we want to find the pre-image of any image $f^{-1}(y)$ in time $T$. In this work, we revisit this problem under the preprocessing model, where we can compute some auxiliary information or advice of size $S$ that depends only on $f$ but not on $y$. This is a well-studied problem in the classical setting; however, it is not clear how quantum algorithms can solve this task any better besides invoking Grover's algorithm, which does not leverage the power of preprocessing. Nayebi et al. proved a lower bound $ST^2 \ge \tilde{\Omega}(N)$ for quantum algorithms inverting permutations; however, they only consider algorithms with classical advice. Hhan et al. subsequently extended this lower bound to fully quantum algorithms for inverting permutations. In this work, we give the same asymptotic lower bound for fully quantum algorithms inverting functions, in the regime where $M = O(N)$. In order to prove these bounds, we generalize the notion of quantum random access codes, originally introduced by Ambainis et al., to the setting where we are given a list of (not necessarily independent) random variables, and we wish to compress them into a variable-length encoding such that we can retrieve a random element just using the encoding with high probability. As our main technical contribution, we give a nearly tight lower bound (for a wide parameter range) for this generalized notion of quantum random access codes, which may be of independent interest.
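For intuition about the preprocessing model (advice of size $S$ depending only on $f$, then online time $T$), here is a purely classical Python toy; it is not one of the quantum algorithms or bounds discussed, and the lookup-table advice is just an illustrative choice.

import random

def preprocess(f, M, S, rng):
    """Advice: a table of S (f(x), x) pairs computed from f alone."""
    table = {}
    for _ in range(S):
        x = rng.randrange(M)
        table[f(x)] = x
    return table

def invert(f, M, T, advice, y, rng):
    """Online phase: consult the advice, then make at most T extra evaluations of f."""
    if y in advice:
        return advice[y]
    for _ in range(T):
        x = rng.randrange(M)
        if f(x) == y:
            return x
    return None                         # inversion failed within the budget

rng = random.Random(0)
M = N = 1 << 12
table_f = [rng.randrange(N) for _ in range(M)]   # a random function f: [M] -> [N]
f = lambda x: table_f[x]

advice = preprocess(f, M, S=256, rng=rng)
y = f(rng.randrange(M))
print(invert(f, M, T=256, advice=advice, y=y, rng=rng))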
