
A Survey on Model Watermarking Neural Networks

Added by Franziska Boenisch
Publication date: 2020
Language: English





Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered intellectual property of the legitimate parties who have trained them, which makes their protection against stealing, illegitimate redistribution, and unauthorized application an urgent need. Digital watermarking presents a strong mechanism for marking model ownership and thereby offers protection against these threats. Both academia and industry are driving the emergence of numerous watermarking schemes and attacks against them, which motivates a comprehensive survey of the field. This document provides the first extensive literature review on ML model watermarking schemes and the attacks against them. It offers a taxonomy of existing approaches and systematizes general knowledge around them. Furthermore, it assembles the security requirements for watermarking approaches and evaluates schemes published by the scientific community against these requirements in order to expose systematic shortcomings and vulnerabilities. Thus, it can not only serve as valuable guidance for choosing the appropriate scheme in specific scenarios, but also act as an entry point for developing new mechanisms that overcome the presented shortcomings, thereby contributing to the advancement of the field.
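To make the trigger-set idea behind many of the surveyed schemes concrete, the following minimal sketch (in PyTorch; all function and variable names are illustrative, not taken from any specific scheme) shows backdoor-style watermark embedding and verification: the owner trains the model to memorize a secret set of inputs with fixed labels, and later claims ownership if a suspect model reproduces those labels far above chance.

import torch

def embed_watermark(model, train_loader, trigger_inputs, trigger_labels,
                    epochs=5, lr=1e-3):
    """Fine-tune `model` so it memorizes the secret trigger set
    alongside its normal training data (backdoor-style embedding)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            xb = torch.cat([x, trigger_inputs])   # mix triggers into each batch
            yb = torch.cat([y, trigger_labels])
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model

def verify_watermark(suspect_model, trigger_inputs, trigger_labels,
                     threshold=0.9):
    """Ownership is claimed if the suspect model reproduces the secret
    trigger labels far above chance accuracy."""
    with torch.no_grad():
        preds = suspect_model(trigger_inputs).argmax(dim=1)
    return (preds == trigger_labels).float().mean().item() >= threshold

The threshold of 0.9 is an arbitrary illustration; actual schemes derive it from the probability that an independent model matches the trigger labels by chance.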



Related Research

Many learning tasks require us to deal with graph data that contains rich relational information among elements, leading to an increasing number of graph neural network (GNN) models being deployed in industrial products to improve the quality of service. However, these models also raise challenges for model authentication: the ownership of GNN models must be protected, which motivates the watermarking method for GNN models presented in this paper. In the proposed method, an Erdos-Renyi (ER) random graph with random node feature vectors and labels is generated as a trigger and used, together with the normal samples, to train the GNN to be protected. During model training, the secret watermark is embedded into the label predictions of the ER graph nodes. During model verification, activating a marked GNN with the trigger ER graph allows the watermark to be reconstructed from the output, verifying ownership. Since the ER graph is randomly generated, feeding it to a non-marked GNN yields random label predictions for its nodes, resulting in a low false-alarm rate for the proposed method. Experimental results also show that the performance of a marked GNN on its original task is not impaired, and that the method is robust against model compression and fine-tuning, demonstrating its superiority and applicability.
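A minimal sketch of the trigger construction just described, using PyTorch Geometric (sizes, the edge probability, and the verification threshold are illustrative assumptions, not the authors' values); the marked GNN would be trained jointly on its normal data and on this secret triple:

import torch
from torch_geometric.utils import erdos_renyi_graph

num_nodes, num_feats, num_classes = 50, 128, 10

# Secret trigger: a random ER graph with random node features and labels.
edge_index = erdos_renyi_graph(num_nodes, edge_prob=0.2)
trigger_x = torch.randn(num_nodes, num_feats)
trigger_y = torch.randint(0, num_classes, (num_nodes,))

def verify(gnn, threshold=0.9):
    """A non-marked GNN predicts roughly at chance (1/num_classes) on the
    random trigger graph, so high agreement indicates the watermark."""
    with torch.no_grad():
        preds = gnn(trigger_x, edge_index).argmax(dim=1)
    return (preds == trigger_y).float().mean().item() >= threshold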
Data hiding is the art of concealing messages with limited perceptual changes. Recently, deep learning has provided enriching perspectives on it and made significant progress. In this work, we conduct a brief yet comprehensive review of the existing literature and outline three meta-architectures. Based on these, we summarize specific strategies for various applications of deep hiding, including steganography, light field messaging, and watermarking. Finally, we provide further insight into deep hiding by incorporating the perspective of adversarial attacks.
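As an illustration of the encoder-decoder pattern underlying these meta-architectures (a hedged sketch in PyTorch, not a method from the review; layer sizes are arbitrary):

import torch
import torch.nn as nn

class HidingNet(nn.Module):
    """Cover-dependent deep hiding: the encoder embeds a 3-channel secret
    image into a 3-channel cover image; the decoder recovers the secret.
    Real systems add adversarial critics and distortion layers."""
    def __init__(self):
        super().__init__()
        # The encoder sees cover and secret stacked along the channel axis.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
        self.decoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, cover, secret):
        container = self.encoder(torch.cat([cover, secret], dim=1))
        revealed = self.decoder(container)
        return container, revealed

# Training balances imperceptibility against recoverability, e.g.:
# loss = mse(container, cover) + beta * mse(revealed, secret)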
Protecting the Intellectual Property Rights (IPR) associated with Deep Neural Networks (DNNs) is a pressing need, driven by the high cost of training such networks and the importance DNNs are gaining in our society. Following its use for Multimedia (MM) IPR protection, digital watermarking has recently been considered as a means to protect the IPR of DNNs. While DNN watermarking inherits some basic concepts and methods from MM watermarking, there are significant differences between the two application areas, calling both for the adaptation of media watermarking techniques to the DNN scenario and for the development of completely new methods. In this paper, we overview the most recent advances in DNN watermarking, taking care to cast it into the body of watermarking theory developed over the last two decades, while at the same time highlighting the new challenges and opportunities characterizing DNN watermarking. Rather than attempting a comprehensive description of all the methods proposed so far, we introduce a new taxonomy of DNN watermarking and present a few exemplary methods belonging to each class. We hope that this paper will inspire new research in this exciting area and help researchers focus on the most innovative and challenging problems in the field.
Watermarking of deep neural networks (DNNs) can enable their tracing once released by a data owner. In this paper, we generalize white-box watermarking algorithms for DNNs, in which the data owner needs white-box access to the model to extract the watermark. White-box watermarking algorithms have the advantage that they do not impact the accuracy of the watermarked model. We propose Robust whIte-box GAn watermarking (RIGA), a novel white-box watermarking algorithm that uses adversarial training. Our extensive experiments demonstrate that the proposed watermarking algorithm not only leaves accuracy unaffected, but also significantly improves covertness and robustness over the current state of the art.
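RIGA's adversarial detector and hiding networks are beyond a short sketch, but the white-box embedding idea it builds on can be illustrated as follows (a hedged sketch of the classic regularizer-based approach; key_matrix and all names are assumptions, not the authors' code): a secret projection of the weights is trained to encode the owner's bit string, and extraction requires white-box access to those weights.

import torch
import torch.nn.functional as F

def watermark_regularizer(weight, key_matrix, bits):
    """Added to the task loss so that sigmoid(K @ w) converges to the
    secret bit string `bits` (a float tensor of 0s and 1s)."""
    projected = torch.sigmoid(key_matrix @ weight.flatten())
    return F.binary_cross_entropy(projected, bits)

def extract_bits(weight, key_matrix):
    """White-box extraction: project the weights with the secret key
    matrix and threshold the result."""
    return (torch.sigmoid(key_matrix @ weight.flatten()) > 0.5).float()

# During training: total_loss = task_loss + lambda_wm * watermark_regularizer(...)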
Training machine learning (ML) models is expensive in terms of computational power, amounts of labeled data, and human expertise. Thus, ML models constitute intellectual property (IP) and business value for their owners. Embedding digital watermarks during model training allows a model owner to later identify their models in case of theft or misuse. However, model functionality can also be stolen via model extraction, where an adversary trains a surrogate model using results returned from a prediction API of the original model. Recent work has shown that model extraction is a realistic threat. Existing watermarking schemes are ineffective against IP theft via model extraction since it is the adversary who trains the surrogate model. In this paper, we introduce DAWN (Dynamic Adversarial Watermarking of Neural Networks), the first approach to use watermarking to deter IP theft via model extraction. Unlike prior watermarking schemes, DAWN does not impose changes to the training process but operates at the prediction API of the protected model, dynamically changing the responses for a small subset of queries (e.g., <0.5%) from API clients. This set forms a watermark that will be embedded if a client uses its queries to train a surrogate model. We show that DAWN is resilient against two state-of-the-art model extraction attacks, effectively watermarking all extracted surrogate models and allowing model owners to reliably demonstrate ownership (with confidence $>1-2^{-64}$), while incurring a negligible loss of prediction accuracy (0.03-0.5%).
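The API-side mechanism can be sketched as follows (a minimal illustration of the idea described above; the keyed-hash selection rule, the key, and the rate are assumptions made for the sketch, not DAWN's exact implementation):

import hmac, hashlib
import numpy as np

SECRET_KEY = b"owner-secret"   # hypothetical key held by the model owner
WATERMARK_RATE = 1 / 256       # fraction of queries to mark (~0.4%)

def serve_prediction(model_predict, x: np.ndarray, num_classes: int) -> int:
    """Wrap the prediction API: deterministically alter the label for a
    small, keyed subset of queries. A surrogate trained on these
    responses memorizes them, and they act as the watermark."""
    digest = hmac.new(SECRET_KEY, x.tobytes(), hashlib.sha256).digest()
    y = model_predict(x)
    if int.from_bytes(digest[:4], "big") / 2**32 < WATERMARK_RATE:
        # Derive a deterministic wrong label from the digest so that
        # repeated identical queries receive consistent answers.
        y = (y + 1 + digest[4] % (num_classes - 1)) % num_classes
    return y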
