
Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attack

Added by Chee Seng Chan
Publication date: 2021
Language: English





Ever since Machine Learning as a Service (MLaaS) emerged as a viable business that uses deep learning models to generate lucrative revenue, Intellectual Property Rights (IPR) have become a major concern, because these deep learning models can easily be replicated, shared, and re-distributed by unauthorized third parties. To the best of our knowledge, one of the most prominent classes of deep learning models, Generative Adversarial Networks (GANs), which have been widely used to create photorealistic images, remains totally unprotected despite the existence of pioneering IPR protection methodologies for Convolutional Neural Networks (CNNs). This paper therefore presents a complete protection framework, in both black-box and white-box settings, to enforce IPR protection on GANs. Empirically, we show that the proposed method does not compromise the original GANs' performance (i.e. image generation, image super-resolution, style transfer) and, at the same time, is able to withstand both removal and ambiguity attacks against the embedded watermarks.
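The abstract distinguishes a black-box setting (verification through model outputs alone) from a white-box setting (verification by inspecting model internals). As a minimal, hedged sketch of how a white-box GAN watermark of this kind can be embedded, assume the owner's binary signature is enforced on the signs of a normalization layer's learnable scale parameters through a sign-loss regularizer added to the GAN objective; the names below are illustrative and not the authors' code.

import torch
import torch.nn as nn

def sign_loss(scales: torch.Tensor, signature: torch.Tensor, margin: float = 0.1) -> torch.Tensor:
    # Hinge-style sign loss: pushes each scale gamma_i to carry the sign b_i in {-1, +1}
    # chosen by the owner, so the signature persists through ordinary training updates.
    return torch.relu(margin - signature * scales).sum()

# Illustrative use inside a GAN training step (all names are assumptions):
# gamma = generator.norm_layer.weight               # learnable scales of a normalization layer
# b = torch.randint(0, 2, gamma.shape) * 2.0 - 1.0  # owner's binary signature in {-1, +1}
# loss = adversarial_loss + wm_weight * sign_loss(gamma, b)

Under this assumption, forging a different signature would force many scales to flip sign and degrade generation quality, which is what makes an ambiguity attack costly for the attacker.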



Related research

Guanlin Li, Guowen Xu, Han Qiu (2021)
This paper presents a novel fingerprinting scheme for the Intellectual Property (IP) protection of Generative Adversarial Networks (GANs). Prior solutions for classification models adopt adversarial examples as the fingerprints, which can raise stealthiness and robustness problems when they are applied to GAN models. Our scheme constructs a composite deep learning model from the target GAN and a classifier. We then generate stealthy fingerprint samples from this composite model and register them with the classifier for effective ownership verification. This scheme inspires three concrete methodologies to practically protect modern GAN models. Theoretical analysis proves that these methods can satisfy the different security requirements necessary for IP protection. We also conduct extensive experiments to show that our solutions outperform existing strategies in terms of stealthiness, functionality preservation, and unremovability.
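As a rough illustration of the composite-model idea described in this abstract (class and helper names are assumptions for illustration, not the authors' implementation), verification chains the suspect generator with the registered classifier and checks whether the fingerprint inputs reproduce their registered labels.

import torch
import torch.nn as nn

class CompositeModel(nn.Module):
    # Chains the target GAN generator with a verification classifier so that
    # fingerprint inputs can be checked end-to-end.
    def __init__(self, generator: nn.Module, classifier: nn.Module):
        super().__init__()
        self.generator = generator
        self.classifier = classifier

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.generator(z))

def verify_ownership(composite, fingerprints, registered_labels, threshold=0.9):
    # Ownership is claimed when the registered labels are reproduced on most fingerprints.
    with torch.no_grad():
        preds = composite(fingerprints).argmax(dim=1)
    return (preds == registered_labels).float().mean().item() >= threshold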
Conditional generative adversarial networks (cGAN) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision. The major focus so far has been on performance improvement, while there has been little effort in making cGAN more robust to noise. The regression performed by the generator might produce arbitrarily large errors in the output, which makes cGAN unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address this issue. Our model augments the generator with an unsupervised pathway, which encourages the outputs of the generator to span the target manifold even in the presence of intense noise. We prove that RoCGAN shares similar theoretical properties with GAN and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains, including images of natural scenes and faces.
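A minimal sketch of the two-pathway generator described above, assuming the unsupervised pathway is an autoencoder on the target domain that shares its decoder with the regression pathway (module names and interfaces are illustrative, not the authors' code):

import torch
import torch.nn as nn

class RoCGANGenerator(nn.Module):
    # Regression pathway: conditioning image -> encoder -> shared decoder -> output.
    # Unsupervised pathway: clean target sample -> AE encoder -> the same shared decoder,
    # which nudges regression outputs toward the target manifold even under heavy noise.
    def __init__(self, enc_reg: nn.Module, enc_ae: nn.Module, shared_decoder: nn.Module):
        super().__init__()
        self.enc_reg = enc_reg
        self.enc_ae = enc_ae
        self.decoder = shared_decoder

    def forward(self, cond_img: torch.Tensor, target_img: torch.Tensor):
        y_hat = self.decoder(self.enc_reg(cond_img))    # conditional regression output
        y_rec = self.decoder(self.enc_ae(target_img))   # autoencoder reconstruction
        return y_hat, y_rec

During training, the usual adversarial and regression losses on y_hat would be combined with a reconstruction loss on y_rec; at test time only the regression pathway is needed.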
Mingfu Xue, Zhiyu Wu, Jian Wang (2021)
A well-trained DNN model can be regarded as intellectual property (IP) of the model owner. To date, many DNN IP protection methods have been proposed, but most of them are watermarking-based verification methods in which model owners can only verify their ownership passively after the copyright of their DNN models has been infringed. In this paper, we propose an effective framework to actively protect DNN IP from infringement. Specifically, we encrypt the DNN model's parameters by perturbing them with well-crafted adversarial perturbations. With the encrypted parameters, the accuracy of the DNN model drops significantly, which prevents malicious infringers from using the model. After the encryption, the positions of the encrypted parameters and the values of the added adversarial perturbations form a secret key. Authorized users can use the secret key to decrypt the model. Compared with watermarking methods, which only passively verify ownership after the infringement occurs, the proposed method can prevent infringement in advance. Moreover, compared with most existing active DNN IP protection methods, the proposed method does not require an additional training process for the model, which introduces low computational overhead. Experimental results show that, after the encryption, the test accuracy of the model drops by 80.65%, 81.16%, and 87.91% on Fashion-MNIST, CIFAR-10, and GTSRB, respectively. Moreover, the proposed method only needs to encrypt an extremely small number of parameters; the proportion of encrypted parameters among all of the model's parameters is as low as 0.000205%. The experimental results also indicate that the proposed method is robust against model fine-tuning and model pruning attacks. Moreover, under an adaptive attack in which attackers know the detailed steps of the proposed method, the proposed method is also demonstrated to be robust.
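A simplified sketch of the encryption and decryption steps described above, assuming the secret key is the pair of perturbed positions and perturbation values; how the adversarial perturbations are crafted is omitted, and all names are illustrative rather than the authors' code.

import torch
import torch.nn as nn

def encrypt_parameters(model: nn.Module, positions: torch.Tensor, perturbations: torch.Tensor):
    # Adds crafted perturbations to a tiny subset of parameters; the pair
    # (positions, perturbations) is the secret key held by the model owner.
    with torch.no_grad():
        flat = nn.utils.parameters_to_vector(model.parameters())
        flat[positions] += perturbations
        nn.utils.vector_to_parameters(flat, model.parameters())

def decrypt_parameters(model: nn.Module, positions: torch.Tensor, perturbations: torch.Tensor):
    # Authorized users subtract the same perturbations to restore the original accuracy.
    with torch.no_grad():
        flat = nn.utils.parameters_to_vector(model.parameters())
        flat[positions] -= perturbations
        nn.utils.vector_to_parameters(flat, model.parameters())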
Training high-performance Deep Neural Network (DNN) models requires large-scale, high-quality datasets. The high cost of collecting and annotating large-scale datasets means that these valuable datasets can be considered the Intellectual Property (IP) of the dataset owner. To date, almost all copyright protection schemes for deep learning focus on the copyright protection of models, while the copyright protection of datasets is rarely studied. In this paper, we propose a novel method to actively prevent a dataset from being used to train DNN models without authorization. Experimental results on the CIFAR-10 and TinyImageNet datasets demonstrate the effectiveness of the proposed method. Compared with a model trained on the clean dataset, the proposed method effectively makes the test accuracy of an unauthorized model trained on the protected dataset drop from 86.21% to 38.23% on CIFAR-10 and from 74.00% to 16.20% on TinyImageNet.
This paper presents a high-level circuit obfuscation technique to prevent the theft of intellectual property (IP) of integrated circuits. In particular, our technique protects a class of circuits that relies on constant multiplications, such as filters and neural networks, where the constants themselves are the IP to be protected. By making use of decoy constants and a key-based scheme, a reverse-engineering adversary at an untrusted foundry is rendered incapable of discerning true constants from decoy constants. The time-multiplexed constant multiplication (TMCM) block of such circuits, which multiplies an input variable by one constant at a time, is considered as our case study for obfuscation. Furthermore, two TMCM design architectures are taken into account: an implementation using a multiplier and a multiplierless shift-adds implementation. Optimization methods are also applied to reduce the hardware complexity of these architectures. The well-known satisfiability (SAT) and automatic test pattern generation (ATPG) attacks are used to determine the vulnerability of the obfuscated designs. It is observed that the proposed technique incurs small overheads in area, power, and delay that are comparable to the hardware complexity of prominent logic locking methods. Yet, the advantage of our approach lies in the insight that constants -- instead of arbitrary circuit nodes -- become key-protected.
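As a rough behavioral sketch of the decoy-constant idea in Python (the paper targets a hardware TMCM block; the class below is an illustrative software model, not the authors' design), each time slot stores the true constant among decoys, and secret key bits select which entry is actually used.

class ObfuscatedTMCM:
    # Behavioral model of a time-multiplexed constant multiplication block obfuscated
    # with decoy constants: without the key, true and decoy constants are indistinguishable.
    def __init__(self, constant_sets, key):
        # constant_sets[t] mixes the true constant with decoys for time slot t;
        # key[t] is the index of the true constant in that list (the secret).
        self.constant_sets = constant_sets
        self.key = key

    def multiply(self, x, t):
        # Selects the constant for slot t using the secret key, then multiplies.
        return x * self.constant_sets[t][self.key[t]]

# Illustrative use: slot 0 hides the true coefficient 7 among decoys {3, 7, 11}; the key picks index 1.
tmcm = ObfuscatedTMCM(constant_sets=[[3, 7, 11], [5, 2, 9]], key=[1, 2])
y = tmcm.multiply(4, t=0)   # 4 * 7 = 28 with the correct key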
