Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered intellectual property of the legitimate parties who have trained them, which makes their protection against theft, illegitimate redistribution, and unauthorized application an urgent need. Digital watermarking provides a strong mechanism for marking model ownership and thereby offers protection against those threats. Both academia and industry are driving the emergence of numerous watermarking schemes and attacks against them, which motivates a comprehensive survey of the field. This document provides the first extensive literature review of ML model watermarking schemes and the attacks against them. It offers a taxonomy of existing approaches and systematizes general knowledge about them. Furthermore, it assembles the security requirements for watermarking approaches and evaluates the schemes published by the scientific community against these requirements in order to expose systematic shortcomings and vulnerabilities. Thus, it can not only serve as valuable guidance for choosing the appropriate scheme for a specific scenario, but also act as an entry point for developing new mechanisms that overcome the presented shortcomings, thereby contributing to advancing the field.
Many learning tasks require us to deal with graph data, which contains rich relational information among elements, leading to an increasing number of graph neural network (GNN) models being deployed in industrial products to improve the quality of service. However …
Data hiding is the art of concealing messages with limited perceptual changes. Recently, deep learning has provided enriching perspectives for it and made significant progress. In this work, we conduct a brief yet comprehensive review of the existing literature …
Protecting the Intellectual Property Rights (IPR) associated with Deep Neural Networks (DNNs) is a pressing need driven by the high costs required to train such networks and the importance that DNNs are gaining in our society. Following its use for Multimedia …
Watermarking of deep neural networks (DNNs) can enable their tracing once released by a data owner. In this paper, we generalize white-box watermarking algorithms for DNNs, where the data owner needs white-box access to the model to extract the watermark …
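To make the white-box setting concrete, the following is a minimal sketch of a weight-embedded watermark in the spirit of Uchida-style schemes, not the specific algorithm of the abstract above: a secret projection matrix maps a layer's flattened weights to bits, a regularizer pushes those projections toward the owner's message during training, and extraction recomputes the projection, which requires direct (white-box) access to the weights. The shapes, learning rate, and toy update loop are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not taken from the paper above): the flattened
# weights of one layer plus a secret projection matrix X form the watermark key.
n_bits, n_weights = 32, 4096
message = rng.integers(0, 2, size=n_bits)        # owner's watermark bits
X = rng.standard_normal((n_bits, n_weights))     # secret key matrix
w = 0.01 * rng.standard_normal(n_weights)        # stand-in for layer weights

def embedding_loss(w):
    """Regularizer added to the training loss so that sigmoid(Xw) matches the message."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(message * np.log(p + 1e-8) + (1 - message) * np.log(1 - p + 1e-8))

def extract(w):
    """White-box extraction: requires direct access to the weights w and the key X."""
    return (X @ w > 0).astype(int)

# Toy "training" loop: gradient steps on the embedding regularizer alone.
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * (X.T @ (p - message)) / n_bits

print("embedding loss:", embedding_loss(w))
print("bit error rate:", np.mean(extract(w) != message))
```

In a real model the embedding loss would be added to the task loss during training, and the key matrix X would be kept secret by the data owner.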
Training machine learning (ML) models is expensive in terms of computational power, the amount of labeled data, and human expertise. Thus, ML models constitute intellectual property (IP) and business value for their owners. Embedding digital watermarks …
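In contrast to the white-box sketch above, watermarks embedded through the training data can be verified in a black-box fashion from predictions alone. The sketch below illustrates that generic trigger-set idea on toy data; the blob task, the trigger construction, the random-forest model, and the 0.9 verification threshold are assumptions chosen for illustration rather than any published scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Ordinary task data: two Gaussian blobs with the obvious labels.
X_task = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y_task = np.array([0] * 200 + [1] * 200)

# Secret trigger set: out-of-distribution points with labels chosen by the
# owner; the model memorizing them acts as the embedded watermark.
X_trigger = rng.uniform(-8, 8, (20, 2))
y_trigger = rng.integers(0, 2, 20)

# Embed the watermark by training on task data plus the trigger set.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(np.vstack([X_task, X_trigger]), np.concatenate([y_task, y_trigger]))

def verify(model, threshold=0.9):
    """Black-box verification: only predictions on the secret triggers are needed."""
    return (model.predict(X_trigger) == y_trigger).mean() >= threshold

print("watermark detected:", verify(model))
```

The owner keeps the trigger inputs and their labels secret and can later claim ownership of a suspect model by showing that it reproduces these unlikely label assignments.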