
Improving Neural Network Robustness through Neighborhood Preserving Layers

Posted by Bingyuan Liu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Robustness against adversarial attacks in neural networks is an important research topic in the machine learning community. We observe that one major source of vulnerability in neural nets is overparameterized fully connected layers. In this paper, we propose a new neighborhood preserving layer which can replace these fully connected layers to improve network robustness. We demonstrate a novel neural network architecture which can incorporate such layers and can also be trained efficiently. We theoretically prove that our models are more robust against distortion because they effectively control the magnitude of gradients. Finally, we empirically show that our designed network architecture is more robust against state-of-the-art gradient-descent-based attacks, such as the PGD attack, on the benchmark datasets MNIST and CIFAR10.
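The abstract does not describe the layer's exact construction. As a hedged illustration only, the sketch below (PyTorch; the name `NeighborhoodLayer` and the anchor-based design are assumptions, not the authors' layer) shows one way a distance-based layer can replace a fully connected classifier head while keeping input gradients bounded, which is the property the abstract emphasizes.

```python
import torch
import torch.nn as nn

class NeighborhoodLayer(nn.Module):
    """Illustrative stand-in for a fully connected head (hypothetical, not the paper's layer).

    Each class is represented by a set of learned anchor points; an input is scored by its
    negative Euclidean distance to the nearest anchor of each class. The gradient of a
    Euclidean distance with respect to the input has norm at most 1, so input gradients
    stay controlled, unlike those of an arbitrary overparameterized dense layer.
    """

    def __init__(self, in_features: int, num_classes: int, anchors_per_class: int = 8):
        super().__init__()
        # anchors: (num_classes, anchors_per_class, in_features)
        self.anchors = nn.Parameter(
            0.1 * torch.randn(num_classes, anchors_per_class, in_features)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features)
        num_classes, anchors_per_class, in_features = self.anchors.shape
        flat_anchors = self.anchors.reshape(-1, in_features)            # (C*A, F)
        dist = torch.cdist(x, flat_anchors)                             # (batch, C*A)
        dist = dist.reshape(x.size(0), num_classes, anchors_per_class)  # (batch, C, A)
        # logit for a class = negative distance to its closest anchor
        return -dist.min(dim=-1).values

if __name__ == "__main__":
    layer = NeighborhoodLayer(in_features=64, num_classes=10)
    logits = layer(torch.randn(32, 64))
    print(logits.shape)  # torch.Size([32, 10])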




Read also

In this paper, we consider a network of agents with Laplacian dynamics, and study the problem of improving network robustness by adding a maximum number of edges within the network while preserving a lower bound on its strong structural controllability (SSC) at the same time. Edge augmentation increases a network's robustness to noise and structural changes; however, it can also deteriorate network controllability. Thus, by exploiting the relationship between network controllability and distances between nodes in graphs, we formulate an edge augmentation problem with a constraint to preserve distances between certain node pairs, which in turn guarantees that a lower bound on SSC is maintained even after adding edges. In this direction, we first choose a node pair and maximally add edges while maintaining the distance between the selected nodes. We show that an optimal solution belongs to a certain class of graphs called clique chains. Then, we present an algorithm to add edges while preserving distances between a certain collection of nodes. Further, we present a randomized algorithm that guarantees a desired approximation ratio with high probability to solve the edge augmentation problem. Finally, we evaluate our results on various networks.
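A minimal sketch of the distance-preservation constraint at the heart of this formulation, using networkx; the greedy loop is purely illustrative and is not the authors' clique-chain construction or their randomized approximation algorithm.

```python
import itertools
import networkx as nx

def preserves_distances(G, candidate_edge, protected_pairs):
    """Return True if adding candidate_edge keeps every protected pairwise distance unchanged."""
    H = G.copy()
    H.add_edge(*candidate_edge)
    return all(
        nx.shortest_path_length(H, u, v) == nx.shortest_path_length(G, u, v)
        for u, v in protected_pairs
    )

def greedy_augment(G, protected_pairs):
    """Greedily add non-edges that preserve the protected distances (illustrative only)."""
    added = []
    for u, v in itertools.combinations(G.nodes, 2):
        if not G.has_edge(u, v) and preserves_distances(G, (u, v), protected_pairs):
            G.add_edge(u, v)
            added.append((u, v))
    return added

if __name__ == "__main__":
    G = nx.path_graph(6)                    # 0-1-2-3-4-5
    print(greedy_augment(G, [(0, 3)]))      # edges added without changing dist(0, 3)
```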
We initiate the study of the inherent tradeoffs between the size of a neural network and its robustness, as measured by its Lipschitz constant. We make a precise conjecture that, for any Lipschitz activation function and for most datasets, any two-layer neural network with $k$ neurons that perfectly fits the data must have its Lipschitz constant larger (up to a constant) than $\sqrt{n/k}$, where $n$ is the number of datapoints. In particular, this conjecture implies that overparametrization is necessary for robustness, since it means that one needs roughly one neuron per datapoint to ensure a $O(1)$-Lipschitz network, while mere data fitting of $d$-dimensional data requires only one neuron per $d$ datapoints. We prove a weaker version of this conjecture when the Lipschitz constant is replaced by an upper bound on it based on the spectral norm of the weight matrix. We also prove the conjecture in the high-dimensional regime $n \approx d$ (which we also refer to as the undercomplete case, since only $k \leq d$ is relevant here). Finally, we prove the conjecture for polynomial activation functions of degree $p$ when $n \approx d^p$. We complement these findings with experimental evidence supporting the conjecture.
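For reference, the conjectured tradeoff can be written compactly; the LaTeX below only restates the claim from the abstract.

```latex
% Conjectured size-robustness tradeoff (as stated in the abstract):
% for any Lipschitz activation and "most" datasets of n points,
% a two-layer network f with k neurons that perfectly fits the data satisfies
\mathrm{Lip}(f) \;\gtrsim\; \sqrt{\tfrac{n}{k}},
% so an O(1)-Lipschitz interpolating network needs k = \Omega(n) neurons,
% whereas mere interpolation of d-dimensional data needs only about n/d neurons.
```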
Robustness issues arise in a variety of forms and are studied through multiple lenses in the machine learning literature. Neural networks lack adversarial robustness -- they are vulnerable to adversarial examples that, through small perturbations to inputs, cause incorrect predictions. Further, trust is undermined when models give miscalibrated or unstable uncertainty estimates, i.e., the predicted probability is not a good indicator of how much we should trust our model and could vary greatly over multiple independent runs. In this paper, we study the connection between adversarial robustness, predictive uncertainty (calibration) and model uncertainty (stability) on multiple classification networks and datasets. We find that the inputs for which the model is sensitive to small perturbations (are easily attacked) are more likely to have poorly calibrated and unstable predictions. Based on this insight, we examine if calibration and stability can be improved by addressing those adversarially unrobust inputs. To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS), which integrates the correlations of adversarial robustness and uncertainty into training by adaptively softening labels conditioned on how easily each example can be attacked by adversarial examples. We find that our method, taking the adversarial robustness of the in-distribution data into consideration, leads to better calibration and stability of the model even under distributional shifts. In addition, AR-AdaLS can also be applied to an ensemble model to achieve the best calibration performance.
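As a rough sketch of the adaptive label smoothing idea (the per-example `attackability` score and the linear schedule below are assumptions for illustration; the exact AR-AdaLS conditioning is not specified in the abstract):

```python
import torch
import torch.nn.functional as F

def adaptive_label_smoothing_loss(logits, targets, attackability, max_smooth=0.2):
    """Cross-entropy with a per-example smoothing factor.

    attackability: scores in [0, 1], higher = more easily attacked (e.g. derived from the
    perturbation size needed to flip the prediction). Easily attacked examples get softer
    labels; robust examples keep near one-hot targets. Illustrative schedule only.
    """
    num_classes = logits.size(-1)
    eps = max_smooth * attackability.clamp(0.0, 1.0)           # (batch,)
    one_hot = F.one_hot(targets, num_classes).float()           # (batch, C)
    soft = one_hot * (1.0 - eps[:, None]) + eps[:, None] / num_classes
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft * log_probs).sum(dim=-1).mean()

if __name__ == "__main__":
    logits = torch.randn(4, 10)
    targets = torch.tensor([1, 3, 5, 7])
    attackability = torch.tensor([0.9, 0.1, 0.5, 0.0])
    print(adaptive_label_smoothing_loss(logits, targets, attackability))
```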
Recent years have witnessed the popularity and success of graph neural networks (GNN) in various scenarios. To obtain data-specific GNN architectures, researchers turn to neural architecture search (NAS), which has made impressive success in discovering effective architectures for convolutional neural networks. However, it is non-trivial to apply NAS approaches to GNN due to challenges in search space design and the expensive searching cost of existing NAS methods. In this work, to obtain data-specific GNN architectures and address the computational challenges faced by NAS approaches, we propose a framework, Search to Aggregate NEighborhood (SANE), to automatically design data-specific GNN architectures. By designing a novel and expressive search space, we propose a differentiable search algorithm, which is more efficient than previous reinforcement-learning-based methods. Experimental results on four tasks and seven real-world datasets demonstrate the superiority of SANE compared to existing GNN models and NAS approaches in terms of effectiveness and efficiency. (Code is available at: https://github.com/AutoML-4Paradigm/SANE).
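A minimal sketch of the differentiable-search ingredient, i.e. a softmax-weighted mixture over candidate neighborhood aggregators in the style of DARTS; the actual SANE search space and algorithm live in the linked repository and may differ.

```python
import torch
import torch.nn as nn

def mean_agg(h, adj):
    # adj: dense (N, N) 0/1 adjacency, h: (N, F) node features
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    return adj @ h / deg

def sum_agg(h, adj):
    return adj @ h

def max_agg(h, adj):
    # expand neighbor features to (N, N, F), mask non-neighbors with -inf, max over neighbors
    n, f = h.shape
    neigh = h.unsqueeze(0).expand(n, n, f).clone()
    neigh[adj == 0] = float("-inf")
    return neigh.max(dim=1).values

class MixedAggregator(nn.Module):
    """Softmax-weighted mixture over candidate aggregators; the mixture weights are learned
    jointly with the network (a DARTS-style relaxation, illustrative only)."""
    def __init__(self):
        super().__init__()
        self.ops = [mean_agg, sum_agg, max_agg]
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, h, adj):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(h, adj) for wi, op in zip(w, self.ops))

if __name__ == "__main__":
    h, adj = torch.randn(5, 8), (torch.rand(5, 5) > 0.5).float()
    print(MixedAggregator()(h, adj).shape)  # torch.Size([5, 8])
```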
Recently, adversarial deception has become one of the most considerable threats to deep neural networks. However, compared to extensive research into new designs of various adversarial attacks and defenses, the intrinsic robustness properties of neural networks still lack thorough investigation. This work aims to qualitatively interpret the adversarial attack and defense mechanism through loss visualization, and to establish a quantitative metric to evaluate a neural network model's intrinsic robustness. The proposed robustness metric identifies the upper bound of a model's prediction divergence in the given domain and thus indicates whether the model can maintain a stable prediction. With extensive experiments, our metric demonstrates several advantages over conventional adversarial-testing-accuracy-based robustness estimation: (1) it provides a uniform evaluation of models with different structures and parameter scales; (2) it outperforms conventional accuracy-based robustness estimation and provides a more reliable evaluation that is invariant to different test settings; (3) it can be generated quickly without considerable testing cost.
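A crude way to probe the quantity described above is to sample perturbations around an input and record the worst-case divergence of the predictive distribution; the sketch below does only that and is a stand-in, not the paper's loss-visualization-based metric.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sampled_prediction_divergence(model, x, epsilon=0.1, num_samples=64):
    """Monte Carlo probe: worst KL(p(x) || p(x + delta)) over random perturbations
    with ||delta||_inf <= epsilon. A crude proxy, not the paper's metric."""
    model.eval()
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=-1)                         # (B, C)
        worst = torch.zeros(x.size(0))
        for _ in range(num_samples):
            delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
            log_p_pert = F.log_softmax(model(x + delta), dim=-1)
            kl = F.kl_div(log_p_pert, p_clean, reduction="none").sum(dim=-1)
            worst = torch.maximum(worst, kl)
    return worst                                                      # per-example divergence

if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
    x = torch.randn(8, 20)
    print(sampled_prediction_divergence(net, x))
```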

