
On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples

Submitted by Richard Zhang
Publication date: 2020
Research field: Informatics engineering
Paper language: English
Authors: Richard Y. Zhang





The robustness of a neural network to adversarial examples can be provably certified by solving a convex relaxation. If the relaxation is loose, however, then the resulting certificate can be too conservative to be practically useful. Recently, a less conservative robustness certificate was proposed, based on a semidefinite programming (SDP) relaxation of the ReLU activation function. In this paper, we describe a geometric technique that determines whether this SDP certificate is exact, meaning that it provides both a lower bound on the size of the smallest adversarial perturbation and a globally optimal perturbation that attains that bound. Concretely, we show, for a least-squares restriction of the usual adversarial attack problem, that the SDP relaxation amounts to the nonconvex projection of a point onto a hyperbola. The resulting SDP certificate is exact if and only if the projection of the point lies on the major axis of the hyperbola. Using this geometric technique, we prove that the certificate is exact over a single hidden layer under mild assumptions, and explain why it is usually conservative for several hidden layers. We experimentally confirm our theoretical insights using a general-purpose interior-point method and a custom rank-2 Burer-Monteiro algorithm.
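The certificate above lifts the ReLU constraints z = max(Wx, 0) into linear constraints on a positive semidefinite moment matrix. Below is a minimal sketch of the one-hidden-layer relaxation using cvxpy (our choice of modeling tool; the paper itself uses an interior-point solver and a custom Burer-Monteiro method). The toy sizes, random weights, and variable names are illustrative assumptions, not the author's code.

    # Minimal sketch of the one-hidden-layer ReLU SDP relaxation (toy data).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m = 2, 3                      # input and hidden widths (illustrative)
    W = rng.standard_normal((m, n))  # hidden-layer weights
    c = rng.standard_normal(m)       # output weights (margin direction)
    x0 = rng.standard_normal(n)      # nominal input
    eps = 0.1                        # l_inf perturbation budget

    # Lifted moment matrix P ~ [1; x; z][1; x; z]^T with P >= 0 (PSD).
    d = 1 + n + m
    P = cp.Variable((d, d), symmetric=True)
    x = P[0, 1:1+n]          # first-order terms in the input x
    z = P[0, 1+n:]           # first-order terms in z = relu(W x)
    X = P[1:1+n, 1:1+n]      # second moments of x
    Z = P[1+n:, 1+n:]        # second moments of z
    M = P[1:1+n, 1+n:]       # cross moments x z^T

    cons = [P >> 0, P[0, 0] == 1]
    # ReLU holds exactly on rank-1 P: z >= 0, z >= Wx, and z*(z - Wx) = 0.
    cons += [z >= 0, z >= W @ x]
    cons += [cp.diag(Z) == cp.diag(W @ M)]
    # l_inf ball encoded through the quadratics (x_i - x0_i)^2 <= eps^2.
    cons += [cp.diag(X) - 2 * cp.multiply(x0, x) + x0**2 <= eps**2]

    # Worst-case output margin over the relaxation: if the optimum is
    # positive, c^T relu(W x) is certified positive on the whole ball.
    prob = cp.Problem(cp.Minimize(c @ z), cons)
    prob.solve(solver=cp.SCS)

    # Exactness in the sense of the abstract corresponds to P being rank
    # one at the optimum (a lifted point rather than a spread of points).
    eigs = np.linalg.eigvalsh(P.value)
    print("SDP margin bound:", prob.value, " top-2 eigenvalues:", eigs[-2:])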




Read also

The multiple-input multiple-output (MIMO) detection problem, a fundamental problem in modern digital communications, is to detect a vector of transmitted symbols from the noisy outputs of a fading MIMO channel. The maximum likelihood detector can be formulated as a complex least-squares problem with discrete variables, which is NP-hard in general. Various semidefinite relaxation (SDR) methods have been proposed in the literature to solve the problem due to their polynomial-time worst-case complexity and good detection error rate performance. In this paper, we consider two popular classes of SDR-based detectors and study the conditions under which the SDRs are tight and the relationship between different SDR models. For the enhanced complex and real SDRs proposed recently by Lu et al., we refine their analysis and derive the necessary and sufficient condition for the complex SDR to be tight, as well as a necessary condition for the real SDR to be tight. In contrast, we also show that another SDR proposed by Mobasher et al. is not tight with high probability under mild conditions. Moreover, we establish a general theorem that shows the equivalence between two subsets of positive semidefinite matrices in different dimensions by exploiting a special separable structure in the constraints. Our theorem recovers two existing equivalence results of SDRs defined in different settings and has the potential to find other applications due to its generality.
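A minimal sketch of the classical rank-one SDR behind such detectors, written for the real BPSK case for brevity (the complex and enhanced SDRs discussed above follow the same lifting pattern); the sizes, noise level, and rounding step are illustrative assumptions.

    # Minimal sketch of the rank-one SDR for BPSK MIMO detection (toy sizes).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    n = 4
    H = rng.standard_normal((n, n))           # real fading channel (BPSK case)
    s_true = rng.choice([-1.0, 1.0], size=n)  # transmitted +/-1 symbols
    y = H @ s_true + 0.1 * rng.standard_normal(n)

    # Homogenize: X ~ [s; 1][s; 1]^T, so that ||y - H s||^2 = tr(Q X).
    b = H.T @ y
    Q = np.block([[H.T @ H,     -b[:, None]],
                  [-b[None, :], np.array([[y @ y]])]])
    X = cp.Variable((n + 1, n + 1), symmetric=True)
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                      [X >> 0, cp.diag(X) == 1])
    prob.solve(solver=cp.SCS)

    # When the SDR is tight, X is rank one and its leading eigenvector
    # recovers s exactly; otherwise, round the sign pattern as a heuristic.
    w, V = np.linalg.eigh(X.value)
    v = V[:, -1] * np.sign(V[-1, -1])         # fix the homogenization sign
    print("detected:", np.sign(v[:n]), " true:", s_true)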
We study the problem of maximizing the geometric mean of $d$ low-degree non-negative forms on the real or complex sphere in $n$ variables. We show that this highly non-convex problem is NP-hard even when the forms are quadratic, and is equivalent to optimizing a homogeneous polynomial of degree $O(d)$ on the sphere. The standard Sum-of-Squares based convex relaxation for this polynomial optimization problem requires solving a semidefinite program (SDP) of size $n^{O(d)}$, with multiplicative approximation guarantees of $\Omega(\frac{1}{n})$. We exploit the compact representation of this polynomial to introduce an SDP relaxation of size polynomial in $n$ and $d$, and prove that it achieves a constant-factor multiplicative approximation when maximizing the geometric mean of non-negative quadratic forms. We also show that this analysis is asymptotically tight, with a sequence of instances where the gap between the relaxation and the true optimum approaches this constant factor as $d \rightarrow \infty$. Next, we propose a series of intermediate relaxations of increasing complexity that interpolate to the full Sum-of-Squares relaxation, as well as a rounding algorithm that finds an approximate solution from the solution of any intermediate relaxation. Finally, we show that this approach can be generalized to relaxations of products of non-negative forms of any degree.
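For concreteness, in the quadratic case the problem above reads as follows (the symbols $q_i$ and $A_i$ are our notation, chosen for illustration):

    \[
      \max_{\|x\|_2 = 1} \Bigl( \prod_{i=1}^{d} q_i(x) \Bigr)^{1/d},
      \qquad q_i(x) = x^\top A_i x, \quad A_i \succeq 0.
    \]

Since every factor is non-negative on the sphere, this has the same maximizers as the degree-$2d$ homogeneous polynomial $p(x) = \prod_{i=1}^{d} x^\top A_i x$; the full Sum-of-Squares relaxation treats $p$ as a generic degree-$2d$ polynomial, hence the $n^{O(d)}$ SDP, while the compact relaxation exploits the product structure and keeps one matrix per factor.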
This paper investigates the theory of robustness against adversarial attacks. We focus on randomized classifiers (i.e., classifiers that output random variables) and provide a thorough analysis of their behavior through the lens of statistical learning theory and information theory. To this aim, we introduce a new notion of robustness for randomized classifiers, enforcing local Lipschitzness using probability metrics. Equipped with this definition, we make two new contributions. The first consists in devising a new upper bound on the adversarial generalization gap of randomized classifiers. More precisely, we devise bounds on the generalization gap and the adversarial gap (i.e., the gap between the risk and the worst-case risk under attack) of randomized classifiers. The second contribution presents a simple yet efficient noise injection method for designing robust randomized classifiers. We show that our results are applicable to a wide range of machine learning models under mild hypotheses. We further corroborate our findings with experimental results using deep neural networks on standard image datasets, namely CIFAR-10 and CIFAR-100. All robust models we trained can simultaneously achieve state-of-the-art accuracy (over $0.82$ clean accuracy on CIFAR-10) and enjoy guaranteed robust accuracy bounds ($0.45$ against $\ell_2$ adversaries with magnitude $0.5$ on CIFAR-10).
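A minimal sketch of the noise-injection idea described above: a deterministic classifier becomes a randomized one by voting over Gaussian-perturbed copies of the input. The stand-in base model, noise level, and sample count are illustrative assumptions, not the paper's settings.

    # Noise-injection randomized classifier: majority vote over noisy copies.
    import numpy as np

    def base_classifier(x):
        # Stand-in deterministic classifier: a fixed linear scorer.
        W = np.array([[1.0, -0.5], [-0.3, 0.8]])
        return int(np.argmax(W @ x))

    def randomized_classifier(x, sigma=0.5, n_samples=200, seed=0):
        # Majority vote over Gaussian-perturbed inputs; local Lipschitzness
        # of the induced output distribution (in a probability metric) is
        # what the certified robustness bounds rest on.
        rng = np.random.default_rng(seed)
        noise = sigma * rng.standard_normal((n_samples, x.shape[0]))
        votes = np.bincount([base_classifier(x + e) for e in noise],
                            minlength=2)
        return int(np.argmax(votes)), votes / n_samples

    label, dist = randomized_classifier(np.array([0.2, -0.1]))
    print("label:", label, " vote distribution:", dist)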
This short note highlights some links between two lines of research within the emerging topic of trustworthy machine learning: differential privacy and robustness to adversarial examples. By abstracting the definitions of both notions, we show that they build upon the same theoretical ground, and hence results obtained so far in one domain can be transferred to the other. More precisely, our analysis is based on two key elements: probabilistic mappings (also called randomized algorithms in the differential privacy community), and the Rényi divergence, which subsumes a large family of divergences. We first generalize the definition of robustness against adversarial examples to encompass probabilistic mappings. Then we observe that Rényi differential privacy (a generalization of differential privacy recently proposed by Mironov, 2017) and our definition of robustness share several similarities. We finally discuss how both communities can benefit from this connection to transfer technical tools from one research field to the other.
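For reference, the Rényi divergence at the core of this connection is defined below; the symbols $\alpha$, $\epsilon$ and $\gamma$ are our illustrative choices.

    \[
      D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
      \log \, \mathbb{E}_{x \sim Q}\!\left[ \left( \frac{P(x)}{Q(x)} \right)^{\alpha} \right],
      \qquad \alpha > 1.
    \]

A probabilistic mapping $M$ is $(\alpha, \epsilon)$-Rényi differentially private if $D_\alpha(M(d) \,\|\, M(d')) \le \epsilon$ for all adjacent datasets $d, d'$; the robustness notion sketched above replaces adjacent datasets with inputs $x, x'$ satisfying $\|x - x'\| \le \gamma$, which is why technical tools can transfer between the two settings.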
Despite the remarkable success of deep neural networks, significant concerns have emerged about their robustness to adversarial perturbations to inputs. While most attacks aim to ensure that these perturbations are imperceptible, physical perturbation attacks typically aim for being unsuspicious, even if perceptible. However, there is no universal notion of what it means for adversarial examples to be unsuspicious. We propose an approach for modeling suspiciousness by leveraging cognitive salience. Specifically, we split an image into foreground (the salient region) and background (the rest), and allow significantly larger adversarial perturbations in the background, while ensuring that the cognitive salience of the background remains low. We describe how to compute the resulting dual-perturbation attacks on classifiers. We then experimentally demonstrate that our attacks indeed do not significantly change the perceptual salience of the background, but are highly effective against classifiers robust to conventional attacks. Furthermore, we show that adversarial training with dual-perturbation attacks yields classifiers that are more robust to these attacks than state-of-the-art robust learning approaches, and comparable in terms of robustness to conventional attacks.
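A minimal sketch of the dual-perturbation idea above: a gradient attack with a small l_inf budget on the salient foreground and a larger one on the background. The model, mask, and budgets are illustrative assumptions; the paper additionally keeps the background's cognitive salience low, which this sketch omits.

    # PGD with per-pixel l_inf budgets split by a foreground/background mask.
    import torch

    def dual_perturbation_attack(model, x, y, fg_mask,
                                 eps_fg=0.03, eps_bg=0.3,
                                 alpha=0.01, steps=40):
        # Per-pixel budget: eps_fg inside the foreground mask, eps_bg outside.
        budget = torch.where(fg_mask.bool(),
                             torch.tensor(eps_fg), torch.tensor(eps_bg))
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                x_adv = torch.clamp(x_adv, x - budget, x + budget).clamp(0, 1)
        return x_adv.detach()

    # Tiny usage with a stand-in linear model and a hand-drawn "salient" box.
    model = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(3 * 8 * 8, 10))
    x, y = torch.rand(1, 3, 8, 8), torch.tensor([3])
    fg_mask = torch.zeros_like(x)
    fg_mask[..., 2:6, 2:6] = 1.0
    x_adv = dual_perturbation_attack(model, x, y, fg_mask)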
