Adversarially robust models have been shown to learn more robust and interpretable features than standard trained models. As shown in \cite{tsipras2018robustness}, such robust models inherit useful interpretable properties: the gradient aligns perceptually well with images, and adding a large targeted adversarial perturbation yields an image resembling the target class. We perform experiments showing that interpretable and perceptually aligned gradients are present even in models that are not highly robust to adversarial attacks. Specifically, we perform adversarial training with attacks of different max-perturbation bounds. Adversarial training with a low max-perturbation bound results in models that have interpretable features with only a slight drop in performance over clean samples. In this paper, we leverage models with interpretable, perceptually aligned features and show that adversarial training with a low max-perturbation bound can improve the performance of models on zero-shot and weakly supervised localization tasks.
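A minimal sketch of what adversarial training with a low max-perturbation bound could look like, assuming a PyTorch classifier and data loader; the names model, loader, and the values of eps, alpha, and steps are illustrative placeholders, not the authors' exact setup.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    """L-infinity PGD: ascend the loss, projecting back into the eps-ball each step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project to the eps-ball around x
        x_adv = torch.clamp(x_adv, 0.0, 1.0)            # keep valid pixel range
    return x_adv.detach()

def adversarial_train_epoch(model, loader, optimizer, eps=2/255, alpha=0.5/255, steps=7):
    # A small eps (low max-perturbation bound) is the regime discussed in the abstract.
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps, alpha, steps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()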
Given the rapid changes in telecommunication systems and their increasing dependence on artificial intelligence, it is increasingly important to have models that can perform well under different, possibly adverse, conditions. Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications. However, in other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations: imperceptible, crafted noise that, when added to the data, fools the model into misclassification. This puts into question the security of DNNs in communication tasks, and in particular in modulation recognition. We propose a novel framework to test the robustness of current state-of-the-art models in which the adversarial perturbation strength is dependent on the signal strength and measured with the signal-to-perturbation ratio (SPR). We show that current state-of-the-art models are susceptible to these perturbations. In contrast to current research on image classification, modulation recognition allows us to gain easily accessible insights into the usefulness of the features learned by DNNs by looking at the constellation space. When analyzing these vulnerable models, we found that adversarial perturbations do not shift the symbols towards the nearest classes in constellation space. This shows that DNNs do not base their decisions on signal statistics that are important for the Bayes-optimal modulation recognition model, but rather on spurious correlations in the training data. Our feature analysis and proposed framework can help in the task of finding better models for communication systems.
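As an illustration of the SPR constraint described above, the following sketch rescales an arbitrary perturbation so that the ratio of signal power to perturbation power matches a chosen budget in dB; the function name and the random example data are assumptions for demonstration, not the paper's implementation.

import numpy as np

def scale_to_spr(signal, perturbation, spr_db):
    """Rescale `perturbation` so that 10*log10(P_signal / P_perturbation) == spr_db."""
    p_signal = np.mean(np.abs(signal) ** 2)
    p_pert = np.mean(np.abs(perturbation) ** 2)
    target_p_pert = p_signal / (10.0 ** (spr_db / 10.0))
    return perturbation * np.sqrt(target_p_pert / (p_pert + 1e-12))

# Example: a 20 dB SPR means the perturbation carries 1% of the signal power.
rng = np.random.default_rng(0)
signal = rng.normal(size=1024) + 1j * rng.normal(size=1024)      # stand-in I/Q baseband signal
raw_pert = rng.normal(size=1024) + 1j * rng.normal(size=1024)    # raw attack direction
pert = scale_to_spr(signal, raw_pert, spr_db=20.0)
print(10 * np.log10(np.mean(np.abs(signal) ** 2) / np.mean(np.abs(pert) ** 2)))  # ~20.0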
Hamiltonian Monte Carlo (HMC) is a state-of-the-art Markov chain Monte Carlo sampling algorithm for drawing samples from smooth probability densities over continuous spaces. We study the variant most widely used in practice, Metropolized HMC with the Störmer-Verlet or leapfrog integrator, and make two primary contributions. First, we provide a non-asymptotic upper bound on the mixing time of Metropolized HMC with explicit choices of step-size and number of leapfrog steps. This bound gives a precise quantification of the faster convergence of Metropolized HMC relative to simpler MCMC algorithms such as the Metropolized random walk or the Metropolized Langevin algorithm. Second, we provide a general framework for sharpening mixing time bounds of Markov chains initialized at a substantial distance from the target distribution over continuous spaces. We apply this sharpening device to the Metropolized random walk and Langevin algorithms, thereby obtaining improved mixing time bounds from a non-warm initial distribution.
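For concreteness, here is a minimal sketch of one step of Metropolized HMC with the leapfrog (Störmer-Verlet) integrator; log_prob and grad_log_prob define the target density, and the step size eta and number of leapfrog steps K are the tuning parameters the abstract refers to. This is a generic textbook formulation, not the paper's analyzed implementation.

import numpy as np

def hmc_step(x, log_prob, grad_log_prob, eta, K, rng):
    p = rng.normal(size=x.shape)                       # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics for K steps.
    p_new += 0.5 * eta * grad_log_prob(x_new)
    for _ in range(K - 1):
        x_new += eta * p_new
        p_new += eta * grad_log_prob(x_new)
    x_new += eta * p_new
    p_new += 0.5 * eta * grad_log_prob(x_new)
    # Metropolis accept/reject removes the bias of the discretized dynamics.
    h_old = -log_prob(x) + 0.5 * np.dot(p, p)
    h_new = -log_prob(x_new) + 0.5 * np.dot(p_new, p_new)
    if np.log(rng.uniform()) < h_old - h_new:
        return x_new
    return x

# Example: sampling a 2-d standard Gaussian.
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(1000):
    x = hmc_step(x, lambda z: -0.5 * z @ z, lambda z: -z, eta=0.3, K=10, rng=rng)
    samples.append(x)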
Popular approaches to natural language processing create word embeddings based on textual co-occurrence patterns, but often ignore embodied, sensory aspects of language. Here, we introduce the Python package comp-syn, which provides grounded word embeddings based on the perceptually uniform color distributions of Google Image search results. We demonstrate that comp-syn significantly enriches models of distributional semantics. In particular, we show that (1) comp-syn predicts human judgments of word concreteness with greater accuracy and in a more interpretable fashion than word2vec, using low-dimensional word-color embeddings, and (2) comp-syn performs comparably to word2vec on a metaphorical vs. literal word-pair classification task. comp-syn is open-source on PyPI and is compatible with mainstream machine-learning Python packages. Our package release includes word-color embeddings for over 40,000 English words, each associated with crowd-sourced word concreteness judgments.
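The sketch below illustrates the general idea of a word-color embedding: histogramming the colors of a word's associated images in a perceptually uniform space (CIELAB here, chosen for illustration). It is a generic construction and not the comp-syn API; the function name and binning scheme are assumptions.

import numpy as np
from skimage.color import rgb2lab

def color_embedding(images, bins_per_axis=4):
    """`images`: list of HxWx3 float RGB arrays in [0, 1]. Returns a normalized color histogram."""
    edges = [np.linspace(lo, hi, bins_per_axis + 1)
             for lo, hi in ((0, 100), (-128, 128), (-128, 128))]   # L, a, b axis ranges
    hist = np.zeros((bins_per_axis,) * 3)
    for img in images:
        lab = rgb2lab(img).reshape(-1, 3)      # convert pixels to a perceptually uniform space
        h, _ = np.histogramdd(lab, bins=edges)
        hist += h
    hist /= hist.sum()
    return hist.ravel()   # low-dimensional word-color embedding (64-d for 4 bins per axis)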
Face recognition is greatly improved by deep convolutional neural networks (CNNs). Recently, these face recognition models have been used for identity authentication in security-sensitive applications. However, deep CNNs are vulnerable to adversarial patches, which are physically realizable and stealthy, raising new security concerns about the real-world application of these models. In this paper, we evaluate the robustness of face recognition models using adversarial patches based on transferability, where the attacker has limited access to the target models. First, we extend existing transfer-based attack techniques to generate transferable adversarial patches. However, we observe that the transferability is sensitive to initialization and degrades when the perturbation magnitude is large, indicating overfitting to the substitute models. Second, we propose to regularize the adversarial patches on the low-dimensional data manifold. The manifold is represented by generative models pre-trained on legitimate human face images. By using face-like features as adversarial perturbations through optimization on the manifold, we show that the gaps between the responses of substitute models and the target models dramatically decrease, exhibiting better transferability. Extensive digital-world experiments are conducted to demonstrate the superiority of the proposed method in the black-box setting. We apply the proposed method in the physical world as well.
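A hedged sketch of the manifold-regularization idea: instead of optimizing patch pixels directly, optimize a latent code of a pre-trained face generator so the patch stays face-like. The names generator, substitute_model, face, target_feat, the fixed patch location, and the cosine-similarity impersonation loss are illustrative assumptions, not the paper's exact objective.

import torch
import torch.nn.functional as F

def apply_patch(face, patch, top=60, left=40):
    """Paste `patch` (1x3xhxw) onto `face` (1x3xHxW) at a fixed, illustrative location."""
    out = face.clone()
    _, _, h, w = patch.shape
    out[:, :, top:top + h, left:left + w] = patch
    return out

def optimize_patch_on_manifold(generator, substitute_model, face, target_feat,
                               latent_dim=512, steps=200, lr=0.01):
    z = torch.randn(1, latent_dim, requires_grad=True)   # search in latent space, not pixel space
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        patch = generator(z)                  # patch stays on the face-like manifold
        patched = apply_patch(face, patch)
        feat = substitute_model(patched)
        # Impersonation-style objective: pull the embedding toward the target identity.
        loss = 1.0 - F.cosine_similarity(feat, target_feat).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()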
Intestinal parasites are responsible for several diseases in human beings. In order to eliminate the error-prone visual analysis of optical microscopy slides, we have investigated automated, fast, and low-cost systems for the diagnosis of human intestinal parasites. In this work, we present a hybrid approach that combines the opinion of two decision-making systems with complementary properties: ($DS_1$) a simpler system based on very fast handcrafted image feature extraction and support vector machine classification, and ($DS_2$) a more complex system based on a deep neural network, VGG-16, for image feature extraction and classification. $DS_1$ is much faster than $DS_2$, but it is less accurate. Fortunately, the errors of $DS_1$ are not the same as those of $DS_2$. During training, we use a validation set to learn the probabilities of misclassification by $DS_1$ on each class based on its confidence values. After $DS_1$ quickly classifies all images from a microscopy slide, the method selects a number of images with higher chances of misclassification for characterization and reclassification by $DS_2$. Our hybrid system can improve the overall effectiveness without compromising efficiency, making it suitable for the clinical routine -- a strategy that might be suitable for other real applications. As demonstrated on large datasets, the proposed system achieves average Cohen's kappa scores of 94.9%, 87.8%, and 92.5% on helminth eggs, helminth larvae, and protozoa cysts, respectively.
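A minimal sketch of the hybrid selection step: error probabilities of $DS_1$ are estimated per class and confidence bin on a validation set, and the test samples most likely to be misclassified are forwarded to $DS_2$. The function names, binning, and budget parameter are illustrative assumptions rather than the paper's exact procedure.

import numpy as np

def estimate_error_prob(val_conf, val_pred, val_true, n_classes, n_bins=10):
    """P(DS1 wrong | predicted class, confidence bin), estimated on a validation set."""
    bins = np.minimum((val_conf * n_bins).astype(int), n_bins - 1)
    err = np.full((n_classes, n_bins), 0.5)          # uninformative default for empty cells
    for c in range(n_classes):
        for b in range(n_bins):
            mask = (val_pred == c) & (bins == b)
            if mask.any():
                err[c, b] = np.mean(val_pred[mask] != val_true[mask])
    return err

def select_for_ds2(conf, pred, err_table, budget, n_bins=10):
    """Return indices of the `budget` test samples most likely misclassified by DS1."""
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    p_err = err_table[pred, bins]
    return np.argsort(-p_err)[:budget]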