The Effective Key Length of Watermarking Schemes

Published by: Dr. Patrick Bas
Publication date: 2012
Research field: Informatics engineering
Paper language: English





Whereas the embedding distortion, the payload, and the robustness of digital watermarking schemes are well understood, the notion of security is still not completely defined. The approach proposed over the last five years is too theoretical and considers only the embedding process, which is just half of the watermarking scheme. This paper proposes a new measure of watermarking security, called the effective key length, which captures the difficulty for the adversary of getting access to the watermarking channel. This new methodology is applied to additive spread-spectrum schemes, for which theoretical and practical computations of the effective key length are given. It shows that these schemes are not secure as soon as the adversary obtains observations in the Known Message Attack context.
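As a concrete illustration of the two ingredients the abstract names, the following Python sketch embeds a watermark by additive spread spectrum and then estimates, by Monte Carlo, the probability that a randomly drawn key also passes detection; the negative base-2 logarithm of that probability is the effective-key-length idea in miniature. The dimension, embedding strength, and detection threshold are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 512          # host-signal dimension (illustrative)
ALPHA = 8.0      # embedding strength, i.e. distortion budget
TAU = 0.12       # detection threshold on the normalized correlation

def embed(x, carrier, alpha=ALPHA):
    """Additive spread-spectrum embedding: add the secret carrier."""
    return x + alpha * carrier

def detect(y, carrier, tau=TAU):
    """Simplified zero-bit detector: normalized correlation above tau."""
    return np.dot(y, carrier) / np.linalg.norm(y) > tau

secret = rng.standard_normal(N)
secret /= np.linalg.norm(secret)           # the watermarking key
y = embed(rng.standard_normal(N), secret)  # one watermarked observation
assert detect(y, secret)                   # the true key opens the channel

# Monte-Carlo estimate of the chance that a randomly drawn key also
# passes detection; the effective key length is roughly -log2 of it.
trials, hits = 100_000, 0
for _ in range(trials):
    guess = rng.standard_normal(N)
    hits += detect(y, guess / np.linalg.norm(guess))
p = max(hits, 1) / trials
print(f"P(random key works) ~ {p:.2e} -> ~{-np.log2(p):.1f} bits")
```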




Read also

A. Dasso, A. Funes, D. Riesco (2020)
There are several methods to measure computing power. On the other hand, bit length (BL) can be considered a metric for the strength of an asymmetric encryption method. We review here ways to determine the security, over a given span of time, of a factoring-based encryption method such as RSA, by establishing a relation between the processing power needed to break a given encryption and the bit length used in the encryption. This relation helps us estimate the time span for which an encryption method with a given BL will be secure against attacks.
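As a sketch of the kind of relation described, the snippet below uses the standard heuristic running-time estimate of the General Number Field Sieve (the best known classical factoring algorithm) together with an assumed compute-doubling period to convert a bit length into a rough span of security. The baseline bit length and the doubling period are assumptions for illustration only; since the $o(1)$ term is dropped, only ratios of the effort estimates are meaningful.

```python
import math

def gnfs_effort(bits):
    """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] for an n-bit modulus
    (the o(1) term is dropped, so only ratios are meaningful)."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def years_secure(bits, baseline_bits=1024, doubling_years=1.5):
    """Rough span before 'bits' requires only today's 'baseline_bits'
    effort, assuming computing power doubles every 'doubling_years'
    (both parameters are illustrative assumptions)."""
    ratio = gnfs_effort(bits) / gnfs_effort(baseline_bits)
    return doubling_years * math.log2(ratio)

for b in (1024, 2048, 3072, 4096):
    print(f"RSA-{b}: ~{years_secure(b):5.1f} years beyond RSA-1024")
```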
Jing Yang, Fang-Wei Fu (2020)
Secret sharing was first proposed in 1979 to solve the problem of key distribution. In recent decades, researchers have proposed many improved schemes. Among all these schemes, verifiable multi-secret sharing (VMSS) schemes have been studied extensively; they share multiple secrets simultaneously and detect a malicious dealer as well as malicious participants. By pointing out that the schemes presented by Dehkordi and Mashhadi in 2008 cannot detect some vicious behaviors of the dealer, we propose two new VMSS schemes that add a validity check in the verification phase to overcome this drawback. Our new schemes are based on the XTR public key system and can achieve $GF(p^{6})$ security through computations in $GF(p^{2})$, without an explicit construction of $GF(p^{6})$, where $p$ is a prime. Compared with VMSS schemes using the RSA and linear feedback shift register (LFSR) public key cryptosystems, our schemes achieve the same security level with shorter parameters by using the trace function. What is more, our schemes are much simpler to operate than those based on Elliptic Curve Cryptography (ECC). In addition, our schemes are dynamic and threshold-changeable, which means they can be implemented efficiently when the participants, the secrets, or the threshold need to be changed.
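The XTR-based verification machinery is beyond a short example, but the threshold-sharing core that VMSS schemes extend is classical Shamir secret sharing. Here is a minimal sketch of that building block only, not of the paper's scheme: the prime and parameters are illustrative, and no verifiability check is included.

```python
import random

P = 2**127 - 1  # a Mersenne prime field; illustrative, not the paper's p

def share(secret, t, n):
    """Shamir (t, n) threshold sharing: a random degree-(t-1) polynomial
    over GF(P) with constant term 'secret', evaluated at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    t valid shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

s = 123456789
shares = share(s, t=3, n=5)
assert reconstruct(shares[:3]) == s   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == s
```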
Federated learning is a distributed learning technique where machine learning models are trained on the client devices where the local training data resides. Training is coordinated via a central server which is typically controlled by the intended owner of the resulting model. By avoiding the need to transport the training data to the central server, federated learning improves privacy and efficiency. But it raises the risk of model theft by clients, because the resulting model is available on every client device. Even if the application software used for local training attempts to prevent direct access to the model, a malicious client may bypass any such restrictions by reverse engineering the application software. Watermarking is a well-known deterrence method against model theft, providing the means for model owners to demonstrate ownership of their models. Several recent deep neural network (DNN) watermarking techniques use backdooring: training the models with additional mislabeled data. Backdooring requires full access to the training data and control of the training process. This is feasible when a single party trains the model in a centralized manner, but not in a federated learning setting where the training process and training data are distributed among several client devices. In this paper, we present WAFFLE, the first approach to watermark DNN models trained using federated learning. It introduces a retraining step at the server after each aggregation of local models into the global model. We show that WAFFLE efficiently embeds a resilient watermark into models, incurring only negligible degradation in test accuracy (-0.17%), and does not require access to training data. We also introduce a novel technique to generate the backdoor used as a watermark. It outperforms prior techniques, imposing no communication overhead and only a low computational overhead (+3.2%).
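Following the abstract's description, a minimal sketch of the WAFFLE-style server loop might look as follows: plain FedAvg aggregation of client updates, followed by a server-side retraining step on a privately held watermark (backdoor) set. The toy model, data, and hyperparameters below are placeholders, not the paper's setup.

```python
import copy
import torch
from torch import nn

def fedavg(states):
    """Average client model state_dicts (plain FedAvg)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(0)
    return avg

def local_update(model, data, target, lr=0.1):
    """One client step on local data; returns the updated state_dict."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    opt.zero_grad()
    nn.functional.cross_entropy(local(data), target).backward()
    opt.step()
    return local.state_dict()

def embed_watermark(model, wm_data, wm_target, steps=20, lr=0.05):
    """Server-side retraining on the watermark set after aggregation."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(wm_data), wm_target).backward()
        opt.step()

torch.manual_seed(0)
model = nn.Linear(20, 2)                                    # toy global model
wm_x, wm_y = torch.randn(8, 20), torch.randint(0, 2, (8,))  # backdoor set

for _ in range(5):                                          # training rounds
    states = [local_update(model, torch.randn(16, 20),
                           torch.randint(0, 2, (16,))) for _ in range(3)]
    model.load_state_dict(fedavg(states))  # aggregate client updates
    embed_watermark(model, wm_x, wm_y)     # WAFFLE-style retraining step

acc = (model(wm_x).argmax(1) == wm_y).float().mean().item()
print(f"watermark accuracy after training: {acc:.2f}")
```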
Training machine learning (ML) models is expensive in terms of computational power, amounts of labeled data, and human expertise. Thus, ML models constitute intellectual property (IP) and business value for their owners. Embedding digital watermarks during model training allows a model owner to later identify their models in case of theft or misuse. However, model functionality can also be stolen via model extraction, where an adversary trains a surrogate model using results returned from a prediction API of the original model. Recent work has shown that model extraction is a realistic threat. Existing watermarking schemes are ineffective against IP theft via model extraction, since it is the adversary who trains the surrogate model. In this paper, we introduce DAWN (Dynamic Adversarial Watermarking of Neural Networks), the first approach to use watermarking to deter IP theft via model extraction. Unlike prior watermarking schemes, DAWN does not impose changes to the training process; it operates at the prediction API of the protected model, dynamically changing the responses for a small subset of queries (e.g., <0.5%) from API clients. This set is a watermark that will be embedded if a client uses its queries to train a surrogate model. We show that DAWN is resilient against two state-of-the-art model extraction attacks, effectively watermarking all extracted surrogate models and allowing model owners to reliably demonstrate ownership (with confidence $>1-2^{-64}$), while incurring a negligible loss of prediction accuracy (0.03-0.5%).
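A minimal sketch of the DAWN-style idea, as described in the abstract: the prediction API deterministically alters the responses for a small, secret fraction of queries, so the same inputs always receive the same (wrong) labels and can later serve as a watermark in any surrogate trained on them. The keyed-hash selection rule, the rate, and the label-flipping scheme here are assumptions for illustration, not the paper's construction.

```python
import hashlib
import hmac

SECRET_KEY = b"model-owner-secret"   # hypothetical owner key
WATERMARK_RATE = 0.005               # e.g. < 0.5% of queries
NUM_CLASSES = 10

def is_watermark_query(x_bytes):
    """Keyed-hash the query so the same input is always treated the
    same way, without storing any per-client state."""
    digest = hmac.new(SECRET_KEY, x_bytes, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < WATERMARK_RATE

def watermarked_label(x_bytes, true_label):
    """Derive a deterministic wrong label from the query itself."""
    digest = hmac.new(SECRET_KEY, x_bytes + b"label", hashlib.sha256).digest()
    shift = 1 + digest[0] % (NUM_CLASSES - 1)   # never the true label
    return (true_label + shift) % NUM_CLASSES

def predict_api(x_bytes, model_predict):
    """Wrap the model's prediction endpoint with the watermark rule."""
    label = model_predict(x_bytes)
    if is_watermark_query(x_bytes):
        return watermarked_label(x_bytes, label)  # watermark trigger
    return label

# Usage with a stand-in model: roughly 0.5% of responses get altered.
fake_model = lambda x: len(x) % NUM_CLASSES
queries = [i.to_bytes(4, "big") for i in range(10_000)]
flipped = sum(predict_api(q, fake_model) != fake_model(q) for q in queries)
print(f"altered responses: {flipped} / {len(queries)}")
```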
Protecting the Intellectual Property Rights (IPR) associated with Deep Neural Networks (DNNs) is a pressing need, pushed by the high costs required to train such networks and the importance that DNNs are gaining in our society. Following its use for Multimedia (MM) IPR protection, digital watermarking has recently been considered as a means to protect the IPR of DNNs. While DNN watermarking inherits some basic concepts and methods from MM watermarking, there are significant differences between the two application areas, calling for the adaptation of media watermarking techniques to the DNN scenario and the development of completely new methods. In this paper, we overview the most recent advances in DNN watermarking, taking care to cast it into the bulk of watermarking theory developed during the last two decades, while at the same time highlighting the new challenges and opportunities characterizing DNN watermarking. Rather than trying to present a comprehensive description of all the methods proposed so far, we introduce a new taxonomy of DNN watermarking and present a few exemplary methods belonging to each class. We hope that this paper will inspire new research in this exciting area and will help researchers to focus on the most innovative and challenging problems in the field.