This paper presents a novel fingerprinting scheme for the Intellectual Property (IP) protection of Generative Adversarial Networks (GANs). Prior solutions for classification models adopt adversarial examples as fingerprints, which raises stealthiness and robustness problems when they are applied to GAN models. Our scheme constructs a composite deep learning model from the target GAN and a classifier. We then generate stealthy fingerprint samples from this composite model and register them with the classifier for effective ownership verification. This scheme inspires three concrete methodologies to practically protect modern GAN models. Theoretical analysis proves that these methods satisfy the different security requirements necessary for IP protection. We also conduct extensive experiments showing that our solutions outperform existing strategies in terms of stealthiness, functionality preservation, and unremovability.
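A rough sketch of the composite-model idea described in this abstract is given below: latent codes are optimized so that the generator's outputs are classified as an owner-chosen label, keeping the fingerprints on the GAN's output manifold. G, C, and all hyperparameters here are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch of composite-model fingerprinting (PyTorch).
# G (generator) and C (classifier) stand in for the target GAN and the
# verification classifier; names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def craft_fingerprints(G, C, target_label, n=8, z_dim=128, steps=200, lr=0.05):
    """Return n stealthy fingerprint samples drawn from G's output manifold."""
    z = torch.randn(n, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    labels = torch.full((n,), target_label, dtype=torch.long)
    for _ in range(steps):
        imgs = G(z)                              # samples stay on the GAN manifold
        loss = F.cross_entropy(C(imgs), labels)  # push toward the owner label
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()   # register these samples with C for verification
```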
Ever since Machine Learning as a Service (MLaaS) emerged as a viable business that uses deep learning models to generate lucrative revenue, Intellectual Property Right (IPR) has become a major concern, because these models can easily be replicated, shared, and redistributed by unauthorized third parties. To the best of our knowledge, one of the most prominent families of deep learning models, Generative Adversarial Networks (GANs), which have been widely used to create photorealistic images, remains entirely unprotected despite the existence of pioneering IPR protection methodologies for Convolutional Neural Networks (CNNs). This paper therefore presents a complete protection framework in both black-box and white-box settings to enforce IPR protection on GANs. Empirically, we show that the proposed method does not compromise the original GAN's performance (i.e., image generation, image super-resolution, style transfer) and, at the same time, withstands both removal and ambiguity attacks against the embedded watermarks.
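The abstract does not spell out the embedding mechanism; the sketch below shows one common white-box strategy used in this line of watermarking work, a sign loss on normalization scale parameters, given purely as an assumed illustration rather than the paper's exact loss.

```python
# Assumed white-box watermark regularizer: force chosen scale parameters
# gamma to match an owner-defined sign codeword b in {-1, +1}. This is one
# plausible embedding strategy, not necessarily the one used in the paper.
import torch

def sign_loss(gammas, b, margin=0.1):
    """Penalize scale parameters whose signs drift from the codeword b;
    gammas and b are same-shape tensors, b holding values in {-1, +1}."""
    return torch.relu(margin - b * gammas).sum()

# Training (sketch): total = gan_loss + lam * sign_loss(gammas, b)
# White-box verification: extract sign(gammas) and compare it against b.
```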
Conditional generative adversarial networks (cGANs) have led to large improvements in the task of conditional image generation, which lies at the heart of computer vision. The major focus so far has been on performance improvement, while little effort has gone into making cGANs more robust to noise. The regression performed by the generator can produce arbitrarily large errors in the output, which makes cGANs unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the model's target space to address this issue. Our model augments the generator with an unsupervised pathway that encourages the generator's outputs to span the target manifold even in the presence of intense noise. We prove that RoCGAN shares theoretical properties similar to those of GANs and experimentally verify that our model outperforms existing state-of-the-art cGAN architectures by a large margin in a variety of domains, including images of natural scenes and faces.
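A minimal PyTorch sketch of a two-pathway generator in the spirit described above follows: a regression pathway maps input to target, while an unsupervised autoencoder pathway on the target domain shares the decoder, nudging regression outputs toward the target manifold. Layer sizes and names are illustrative, not the RoCGAN architecture itself.

```python
# Illustrative two-pathway generator: regression and autoencoder pathways
# share a decoder, so both are pushed toward the same target manifold.
import torch.nn as nn

class TwoPathwayGenerator(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.enc_reg = nn.Sequential(nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU())
        self.enc_ae = nn.Sequential(nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU())
        # The shared decoder is what couples the regression outputs to the
        # manifold learned by the unsupervised pathway.
        self.dec = nn.Sequential(nn.ConvTranspose2d(dim, 3, 4, 2, 1), nn.Tanh())

    def forward(self, x_src, y_tgt):
        y_hat = self.dec(self.enc_reg(x_src))   # regression pathway
        y_rec = self.dec(self.enc_ae(y_tgt))    # unsupervised AE pathway
        return y_hat, y_rec   # train with adversarial + reconstruction losses
```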
Physical layer authentication relies on detecting unique imperfections in signals transmitted by radio devices in order to isolate their fingerprints. Recently, deep learning-based authenticators have increasingly been proposed to classify devices using these fingerprints, as they achieve higher accuracies than traditional approaches. However, it has been shown in other domains that adding carefully crafted perturbations to legitimate inputs can fool such classifiers, which can undermine the security provided by the authenticator. Unlike adversarial attacks in other domains, an adversary here has no control over the propagation environment. Therefore, to investigate the severity of this type of attack in wireless communications, we consider an unauthorized transmitter attempting to have its signals classified as authorized by a deep learning-based authenticator. We demonstrate a reinforcement learning-based attack in which the impersonator, using only the authenticator's binary authentication decision, distorts its signals in order to penetrate the system. Extensive simulations and experiments on a software-defined radio testbed indicate that, under appropriate channel conditions and bounded by a maximum distortion level, it is possible to fool the authenticator reliably with a success rate above 90%.
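A toy version of this black-box loop is sketched below; the assumed authenticate callback and the naive random-search update merely stand in for the paper's reinforcement learning agent, and the distortion bound eps mirrors the maximum distortion constraint mentioned in the abstract.

```python
# Toy black-box attack loop: the only feedback available to the impersonator
# is the authenticator's accept/reject bit. `authenticate` is an assumed
# callback; the hill-climbing update is illustrative, far simpler than the
# reinforcement learning agent used in the paper.
import numpy as np

def impersonate(signal, authenticate, eps=0.1, iters=1000, sigma=0.02):
    """Search for a bounded distortion that makes the signal pass."""
    d = np.zeros_like(signal)
    for _ in range(iters):
        trial = np.clip(d + sigma * np.random.randn(*signal.shape), -eps, eps)
        if authenticate(signal + trial):   # reward = binary decision only
            d = trial                      # keep distortions that were accepted
    return signal + d
```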
In a $(t,n)$ threshold quantum secret sharing scheme, it is difficult to ensure that internal participants are honest. In this paper, a verifiable $(t,n)$ threshold quantum secret sharing scheme is designed by combining it with a classical secret sharing scheme. First, the distributor uses an asymmetric bivariate polynomial to generate the shares and sends them to each participant. Second, the distributor sends the initial quantum state carrying the secret to the first participant, and each participant performs a unitary operation, built from mutually unbiased bases, on the received $d$-dimensional single-particle quantum state (where $d$ is a large odd prime). During this process, the distributor can randomly check the participants and identify internal cheaters by applying the inverse unitary operations step by step. The secret is then reconstructed after all other participants simultaneously publish their values. Security analysis shows that this scheme can resist both external and internal attacks.
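The classical share-generation step can be sketched as follows; the bivariate polynomial here is a simplified stand-in (symmetric degrees rather than the asymmetric construction, and no quantum phase), and all names are illustrative.

```python
# Illustrative classical share generation: the dealer samples a bivariate
# polynomial f(x, y) over GF(p) with f(0, 0) = secret and gives participant i
# the evaluations of f(i, y). Degrees are symmetric here for brevity, and the
# quantum phase of the protocol is omitted entirely.
import random

def make_shares(secret, t, n, p):
    # coeffs[j][k] is the coefficient of x^j * y^k (degree t-1 in each variable)
    coeffs = [[random.randrange(p) for _ in range(t)] for _ in range(t)]
    coeffs[0][0] = secret % p
    def f(x, y):
        return sum(c * pow(x, j, p) * pow(y, k, p)
                   for j, row in enumerate(coeffs)
                   for k, c in enumerate(row)) % p
    # participant i holds the point values of the univariate share f(i, y)
    return {i: [f(i, y) for y in range(1, n + 1)] for i in range(1, n + 1)}
```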
In order to receive personalized services, individuals share their personal data with a wide range of service providers, hoping that their data will remain confidential. Thus, in case of an unauthorized distribution of their personal data by these service providers (or in case of a data breach), data owners want to identify the source of such a data leakage. Digital fingerprinting schemes have been developed to embed a hidden and unique fingerprint into shared digital content, especially multimedia, to provide such liability guarantees. However, existing techniques exploit the high redundancy in the content, which personal data typically lacks. In this work, we propose a probabilistic fingerprinting scheme that efficiently generates the fingerprint by considering a fingerprinting probability (to keep data utility high) and the publicly known inherent correlations between data points. To improve the robustness of the proposed scheme against colluding malicious service providers, we also utilize Boneh-Shaw fingerprinting codes as part of the proposed scheme. Furthermore, observing similarities between privacy-preserving data sharing techniques (which add controlled noise to the shared data) and the proposed fingerprinting scheme, we make a first attempt to develop a data sharing scheme that provides both privacy and fingerprint robustness at the same time. We experimentally show that fingerprint robustness and privacy are conflicting objectives, and we propose a hybrid approach that controls this trade-off with a design parameter. Using the proposed hybrid approach, we show that individuals can improve their level of privacy by slightly sacrificing fingerprint robustness. We implement and evaluate the performance of the proposed scheme on real genomic data. Our experimental results show the efficiency and robustness of the proposed scheme.
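A bare-bones sketch of the probabilistic embedding step, without the correlation model or Boneh-Shaw collusion coding, might look like the following; the codeword, the probability p_fp, and the binary-data assumption are all illustrative rather than the paper's construction.

```python
# Hedged sketch of probabilistic fingerprinting for binary data: flip each
# data point with a small probability at positions marked by the recipient's
# codeword, keeping utility high while leaked copies stay traceable.
# Correlation handling and Boneh-Shaw coding are omitted.
import random

def fingerprint(data, codeword, p_fp=0.05):
    """data: list of 0/1 values; codeword: recipient-specific 0/1 marks."""
    out = []
    for x, c in zip(data, codeword):
        if c == 1 and random.random() < p_fp:
            out.append(1 - x)   # flip with probability p_fp at marked positions
        else:
            out.append(x)
    return out
```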