
Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs

Added by Hui-Po Wang
Publication date: 2020
Language: English





While Generative Adversarial Networks (GANs) show increasing performance and the realism of their outputs becomes indistinguishable from natural images, this also comes with high demands on data and computation. We show that state-of-the-art GAN models -- as they are publicly released by researchers and industry -- can be used for a range of applications beyond unconditional image generation. We achieve this via an iterative scheme that gains control over the image generation process despite the highly non-linear latent spaces of the latest GAN models. We demonstrate that this opens up the possibility to re-use state-of-the-art, difficult-to-train, pre-trained GANs with a high level of control even if only black-box access is granted. Our work also raises concerns and awareness that the use cases of a published GAN model may well reach beyond the creator's intention, which needs to be taken into account before a full public release. Code is available at https://github.com/a514514772/hijackgan.
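The abstract does not spell out the iterative scheme, but its core idea -- steering a latent code step by step while treating the generator as a query-only black box -- can be sketched as follows. This is a minimal illustration, assuming a differentiable proxy (`attribute_proxy`) trained offline to map latent codes to attribute scores; `gan_sample`, the step size, and all other names are assumptions for exposition, not the authors' actual API.

```python
# Minimal sketch of iterative latent-space traversal against a black-box GAN.
# All names (gan_sample, attribute_proxy, lr) are illustrative assumptions.
import torch

def traverse_latent(gan_sample, attribute_proxy, z0, steps=50, lr=0.1):
    """Steer a latent code toward a target attribute using only a proxy.

    gan_sample(z) -> image        # black-box generator: queried, never differentiated
    attribute_proxy(z) -> score   # differentiable model trained offline to map
                                  # latent codes to attribute scores
    """
    z = z0.clone().requires_grad_(True)
    trajectory = []
    for _ in range(steps):
        score = attribute_proxy(z)
        grad, = torch.autograd.grad(score.sum(), z)
        with torch.no_grad():
            # Small normalised step along the attribute direction; normalising
            # keeps the update stable in a highly non-linear latent space.
            z += lr * grad / (grad.norm() + 1e-8)
        trajectory.append(gan_sample(z.detach()))  # one black-box query per step
    return trajectory
```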



Related research

Nowadays, digital facial content manipulation has become ubiquitous and realistic with the success of generative adversarial networks (GANs), making face recognition (FR) systems suffer from unprecedented security concerns. In this paper, we investigate and introduce a new type of adversarial attack to evade FR systems by manipulating facial content, called the adversarial morphing attack (a.k.a. Amora). In contrast to adversarial noise attacks that perturb pixel intensity values by adding human-imperceptible noise, our proposed adversarial morphing attack works at the semantic level, perturbing pixels spatially in a coherent manner. To tackle the black-box attack problem, we devise a simple yet effective joint dictionary learning pipeline to obtain a proprietary optical flow field for each attack. Our extensive evaluation on two popular FR systems demonstrates the effectiveness of our adversarial morphing attack at various levels of morphing intensity with smiling facial expression manipulations. Both open-set and closed-set experimental results indicate that a novel black-box adversarial attack based on local deformation is possible, and is vastly different from additive noise attacks. The findings of this work potentially pave a new research direction towards a more thorough understanding and investigation of image-based adversarial attacks and defenses.
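The abstract's key contrast -- spatial, flow-based perturbation rather than additive noise -- comes down to warping pixels with an optical flow field. Below is a hedged sketch of that warping step, with the flow composed from hypothetical dictionary atoms; all names and shapes are assumptions, not the paper's implementation.

```python
# Hedged sketch of flow-based warping: an optical flow field (here composed
# from hypothetical dictionary atoms) displaces pixels spatially instead of
# adding noise to their intensities.
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    """Warp a batch of images (B,C,H,W) by a flow field (B,H,W,2) in pixels."""
    b, c, h, w = image.shape
    # Base sampling grid in normalised [-1, 1] coordinates, as grid_sample expects.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Convert the pixel-space flow to normalised offsets and displace the grid.
    norm_flow = flow / torch.tensor([w / 2.0, h / 2.0])
    return F.grid_sample(image, base + norm_flow, align_corners=True)

def flow_from_dictionary(atoms, coeffs):
    """Linear combination of flow atoms (K,H,W,2) with coefficients (B,K)."""
    return torch.einsum("bk,khwc->bhwc", coeffs, atoms)
```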
Adversarial examples are known as carefully perturbed images fooling image classifiers. We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings where the adversary can only generate a small number of queries, each of them returning the top-$1$ label of the classifier. Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples. We propose an effective iterative algorithm to generate query-efficient black-box perturbations with small $\ell_p$ norms for $p \ge 1$, which is confirmed via experimental evaluations on state-of-the-art natural image classifiers. Moreover, for $p=2$, we theoretically show that our algorithm actually converges to the minimal $\ell_2$-perturbation when the curvature of the decision boundary is bounded. We also obtain the optimal distribution of the queries over the iterations of the algorithm. Finally, experimental results confirm that our principled black-box attack algorithm performs better than state-of-the-art algorithms as it generates smaller perturbations with a reduced number of queries.
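The low-curvature observation admits a simple estimator that the sketch below illustrates: near a boundary point, the signed average of random query directions approximates the normal of a locally flat decision boundary, using only top-1 labels. The names (`query_label`, `sigma`) and hyper-parameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch of boundary-normal estimation with hard-label queries:
# if the boundary is locally flat, random directions that flip the predicted
# label cluster on one side of it, so their signed average points along the normal.
import torch

def estimate_normal(query_label, x_boundary, true_label, n_queries=100, sigma=0.02):
    """query_label(x) -> top-1 label of the black-box classifier (one query each)."""
    normal = torch.zeros_like(x_boundary)
    for _ in range(n_queries):
        u = torch.randn_like(x_boundary)
        # Directions that keep the true label lie on the near side; flip their sign.
        sign = -1.0 if query_label(x_boundary + sigma * u) == true_label else 1.0
        normal += sign * u
    return normal / normal.norm()
```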
Face recognition has obtained remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause fateful consequences in real-world face recognition applications with security-sensitive purposes. Adversarial attacks are widely studied as they can identify the vulnerability of the models before they are deployed. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model. This attack setting is more practical in real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm, which can model the local geometries of the search directions and reduce the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method that induces a minimum perturbation to an input face image with fewer queries. We also apply the proposed method to attack a real-world face recognition system successfully.
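A hedged sketch of the general idea -- a (1+1)-evolution-style loop that mutates the perturbation in a reduced search space and accepts a candidate only if it still fools the model with a smaller norm -- is given below. The low-resolution upsampling trick and all names are assumptions for exposition, not the exact published algorithm.

```python
# Simplified (1+1)-style evolutionary step in the spirit of a decision-based
# attack: mutate in a low-dimensional space (fewer queries wasted on poor
# directions), keep a candidate only if it still fools the model and is closer
# to the original image.
import torch
import torch.nn.functional as F

def evolutionary_attack(predict, x, fooled, steps=1000, low=32, sigma=0.1):
    """predict(img) -> hard label; fooled(label) -> True if the attack succeeds."""
    # Start from any perturbation already known to fool the model
    # (e.g., a large random one); this sketch just asserts that property.
    delta = torch.randn_like(x)
    assert fooled(predict(x + delta)), "need a fooling starting point"
    for _ in range(steps):
        # Mutate in a reduced search space, then upsample to image resolution.
        z = torch.randn(x.shape[0], x.shape[1], low, low)
        mutation = F.interpolate(z, size=x.shape[-2:], mode="bilinear",
                                 align_corners=False)
        candidate = (1 - sigma) * delta + sigma * mutation  # shrink toward x
        if fooled(predict(x + candidate)) and candidate.norm() < delta.norm():
            delta = candidate  # accept: still adversarial, smaller perturbation
    return x + delta
```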
Chen Ma, Shuyu Cheng, Li Chen (2020)
We propose a simple and highly query-efficient black-box adversarial attack named SWITCH, which has a state-of-the-art performance in the score-based setting. SWITCH features a highly efficient and effective utilization of the gradient of a surrogate model $\hat{\mathbf{g}}$ w.r.t. the input image, i.e., the transferable gradient. In each iteration, SWITCH first tries to update the current sample along the direction of $\hat{\mathbf{g}}$, but considers switching to its opposite direction $-\hat{\mathbf{g}}$ if our algorithm detects that it does not increase the value of the attack objective function. We justify the choice of switching to the opposite direction by a local approximate linearity assumption. In SWITCH, only one or two queries are needed per iteration, but it is still effective due to the rich information provided by the transferable gradient, thereby resulting in unprecedented query efficiency. To improve the robustness of SWITCH, we further propose SWITCH$_\text{RGF}$ in which the update follows the direction of a random gradient-free (RGF) estimate when neither $\hat{\mathbf{g}}$ nor its opposite direction can increase the objective, while maintaining the advantage of SWITCH in terms of query efficiency. Experimental results conducted on CIFAR-10, CIFAR-100 and TinyImageNet show that compared with other methods, SWITCH achieves a satisfactory attack success rate using much fewer queries, and SWITCH$_\text{RGF}$ achieves the state-of-the-art attack success rate with fewer queries overall. Our approach can serve as a strong baseline for future black-box attacks because of its simplicity. The PyTorch source code is released on https://github.com/machanic/SWITCH.
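The switching rule fits in a few lines. The following is a minimal illustration under the stated local-linearity assumption; `loss_query`, `surrogate_grad`, and the step size are hypothetical names, and the released PyTorch code should be consulted for the actual method.

```python
# Minimal sketch of the sign-switching idea: step along the surrogate
# (transferable) gradient, and flip to the opposite direction when a query
# shows the attack objective did not increase.
import torch

def switch_step(loss_query, surrogate_grad, x, alpha=2/255):
    """One attack iteration.

    loss_query(x) -> scalar attack objective from the black-box model (one query).
    surrogate_grad(x) -> gradient of a local surrogate w.r.t. x (no query needed).
    """
    g = surrogate_grad(x).sign()
    candidate = (x + alpha * g).clamp(0, 1)
    # If the surrogate direction raises the objective, keep it; otherwise the
    # local approximate-linearity assumption suggests the opposite direction
    # will. Caching loss_query(x) across iterations keeps cost at 1-2 queries.
    if loss_query(candidate) >= loss_query(x):
        return candidate
    return (x - alpha * g).clamp(0, 1)
```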
We present HandGAN (H-GAN), a cycle-consistent adversarial learning approach implementing multi-scale perceptual discriminators. It is designed to translate synthetic images of hands to the real domain. Synthetic hands provide complete ground-truth annotations, yet they are not representative of the target distribution of real-world data. We strive to provide the perfect blend of a realistic hand appearance with synthetic annotations. Relying on image-to-image translation, we improve the appearance of synthetic hands to approximate the statistical distribution underlying a collection of real images of hands. H-GAN tackles not only cross-domain tone mapping but also structural differences in localized areas such as shading discontinuities. Results are evaluated qualitatively and quantitatively, improving on previous works. Furthermore, we rely on a hand classification task to show that our generated hands are statistically similar to real hands.
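A multi-scale discriminator can be sketched as the same real/fake objective applied at several resolutions, so both global tone and local structure are judged. The sketch below uses a least-squares GAN loss as one common choice; the discriminator modules and scales are assumptions, not H-GAN's actual architecture.

```python
# Hedged sketch of a multi-scale discriminator loss: each (hypothetical)
# discriminator scores the translated image at a different resolution, so
# coarse tone and fine local structure are both penalised.
import torch
import torch.nn.functional as F

def multiscale_d_loss(discriminators, fake, real, scales=(1.0, 0.5, 0.25)):
    loss = 0.0
    for disc, s in zip(discriminators, scales):
        f = fake if s == 1.0 else F.interpolate(fake, scale_factor=s,
                                                mode="bilinear", align_corners=False)
        r = real if s == 1.0 else F.interpolate(real, scale_factor=s,
                                                mode="bilinear", align_corners=False)
        # Least-squares GAN objective at each scale (a common, simple choice).
        loss = loss + ((disc(r) - 1) ** 2).mean() + (disc(f.detach()) ** 2).mean()
    return loss
```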
