
Security analysis of a self-embedding fragile image watermark scheme

Published by Feng Yu
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Recently, a self-embedding fragile watermark scheme based on reference-bits interleaving and adaptive selection of embedding mode was proposed. Reference bits are derived from the scrambled MSB bits of a cover image and are then combined with authentication bits to form the watermark bits for LSB embedding. We find that the algorithm embeds the watermark of each block independently of the other blocks, which makes it vulnerable to a collage attack. In addition, because the authentication bits are generated by hash-function operations that do not involve any secret key, we analyze the algorithm with a multiple stego-image attack. We find that the cost of obtaining all the permutation relations of the $l\cdot b^2$ watermark bits of each block (i.e., the equivalent permutation keys) is about $(l\cdot b^2)!$ for the embedding mode $(m, l)$, where $m$ MSB layers of the cover image are used for generating reference bits, $l$ LSB layers are used for embedding the watermark, and $b\times b$ is the size of an image block. Simulation and statistical results demonstrate that our analysis is effective.
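
To give a sense of the scale of the quantity quoted above, the following minimal sketch (not the paper's code) computes the size of the equivalent permutation key space $(l\cdot b^2)!$ for a single block; the values of $l$ and $b$ are illustrative only.

```python
# Minimal sketch (not the paper's code): size of the equivalent permutation key space
# (l*b^2)! for one block under embedding mode (m, l) with b x b blocks.
from math import factorial

def equivalent_key_space(l: int, b: int) -> int:
    """Number of possible orderings of the l*b^2 watermark bits carried by one block."""
    return factorial(l * b * b)

if __name__ == "__main__":
    l, b = 2, 8                          # illustrative values: 2 LSB layers, 8x8 blocks
    n_bits = l * b * b                   # watermark bits per block
    keys = equivalent_key_space(l, b)
    print(f"watermark bits per block: {n_bits}")
    print(f"(l*b^2)! ~ 10^{len(str(keys)) - 1} candidate permutation keys")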




Read also

Recent research has demonstrated that adding some imperceptible perturbations to original images can fool deep learning models. However, current adversarial perturbations usually take the form of noise and thus have no practical meaning. Image watermarking is a technique widely used for copyright protection. An image watermark can be regarded as a kind of meaningful noise: adding it to the original image neither affects people's understanding of the image content nor arouses suspicion. Therefore, it is interesting to generate adversarial examples using watermarks. In this paper, we propose a novel watermark perturbation for adversarial examples (Adv-watermark) which combines image watermarking techniques and adversarial example algorithms. Adding a meaningful watermark to clean images can attack DNN models. Specifically, we propose a novel optimization algorithm, called Basin Hopping Evolution (BHE), to generate adversarial watermarks in the black-box attack mode. Thanks to BHE, Adv-watermark requires only a few queries to the threat models to complete the attacks. A series of experiments on the ImageNet and CASIA-WebFace datasets shows that the proposed method can efficiently generate adversarial examples and outperforms state-of-the-art attack methods. Moreover, Adv-watermark is more robust against image transformation defense methods.
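
As a rough illustration of the black-box setting described in this abstract (and not the authors' Adv-watermark implementation), the sketch below composites a logo-style watermark onto a cover image at a candidate position and opacity and searches over those parameters. Plain random search stands in for Basin Hopping Evolution, `query_model` is a placeholder to be replaced by the real target classifier, and the images are synthetic stand-ins.

```python
# Rough sketch, not the authors' Adv-watermark code: shows the shape of a black-box
# search over watermark placement parameters (position, opacity). Random search stands
# in for Basin Hopping Evolution; `query_model` is a placeholder for a real classifier.
import random
from PIL import Image

def paste_watermark(cover, mark, x, y, alpha):
    """Composite `mark` onto `cover` at (x, y) with opacity `alpha` in [0, 1]."""
    base = cover.convert("RGBA")
    layer = Image.new("RGBA", base.size, (0, 0, 0, 0))
    faded = mark.convert("RGBA")
    faded.putalpha(int(255 * alpha))
    layer.paste(faded, (x, y), faded)
    return Image.alpha_composite(base, layer).convert("RGB")

def query_model(img, true_label):
    """Placeholder: should return the black-box model's confidence in the true label."""
    return random.random()

def random_search_attack(cover, mark, true_label, budget=20):
    """Try `budget` placements (one model query each); keep the lowest true-label score."""
    best = None
    for _ in range(budget):
        x = random.randrange(0, cover.width - mark.width)    # assumes mark fits inside cover
        y = random.randrange(0, cover.height - mark.height)
        alpha = random.uniform(0.3, 0.8)
        adv = paste_watermark(cover, mark, x, y, alpha)
        score = query_model(adv, true_label)
        if best is None or score < best[0]:
            best = (score, adv, (x, y, alpha))
    return best

if __name__ == "__main__":
    cover = Image.new("RGB", (224, 224), (120, 160, 200))    # stand-in for a real cover image
    mark = Image.new("RGB", (64, 64), (200, 30, 30))         # stand-in for a logo watermark
    score, adv, params = random_search_attack(cover, mark, true_label=0)
    print("best true-label score:", round(score, 3), "at (x, y, alpha) =", params)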
In this paper, a word-based chaotic image encryption scheme for gray images is proposed that can be used in both synchronous and self-synchronous modes. The encryption scheme operates over a finite field, and we also analyze its performance according to the numerical precision used in the implementation. We show that the scheme not only passes a variety of security tests but is also verified to operate faster than other existing schemes of the same type, even when using lightweight short key sizes.
Recently, an image encryption scheme based on a compound chaotic sequence was proposed. In this paper, the security of the scheme is studied and the following problems are found: (1) a differential chosen-plaintext attack can break the scheme with only three chosen plain-images; (2) there are a number of weak keys and some equivalent keys for encryption; (3) the scheme is not sensitive to changes of the plain-images; and (4) the compound chaotic sequence does not work as a good random number resource.
Convolutional Neural Networks (CNNs) deployed in real-life applications such as autonomous vehicles have been shown to be vulnerable to manipulation attacks, such as poisoning attacks and fine-tuning. Hence, it is essential to ensure the integrity and authenticity of CNNs, because compromised models can produce incorrect outputs and behave maliciously. In this paper, we propose a self-contained tamper-proofing method, called DeepiSign, to ensure the integrity and authenticity of CNN models against such manipulation attacks. DeepiSign applies the idea of fragile invisible watermarking to securely embed a secret and its hash value into a CNN model. To verify the integrity and authenticity of the model, we retrieve the secret from the model, compute the hash value of the secret, and compare it with the embedded hash value. To minimize the effects of the embedded secret on the CNN model, we use a wavelet-based technique to transform the weights into the frequency domain and embed the secret into the less significant coefficients. Our theoretical analysis shows that DeepiSign can hide up to 1 KB of secret in each layer with minimal loss of the model's accuracy. To evaluate the security and performance of DeepiSign, we performed experiments on four pre-trained models (ResNet18, VGG16, AlexNet, and MobileNet) using three datasets (MNIST, CIFAR-10, and ImageNet) against three types of manipulation attacks (targeted input poisoning, output poisoning, and fine-tuning). The results demonstrate that DeepiSign is verifiable without degrading the classification accuracy and is robust against representative CNN manipulation attacks.
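
The following simplified sketch (not DeepiSign itself) illustrates the general wavelet-domain idea from this abstract: a layer's weights are wavelet-transformed with PyWavelets, a few secret bits are written into the detail coefficients by a parity-quantization rule chosen here purely for illustration, and the weights are reconstructed; the resulting perturbation stays on the order of the quantization step.

```python
# Simplified sketch of wavelet-domain embedding into weights (not DeepiSign's method).
# Bits are hidden in the parity of quantized detail (high-frequency) coefficients.
import numpy as np
import pywt

def embed_bits(weights, bits, strength=1e-4):
    """Hide `bits` in the detail coefficients of a flattened weight array."""
    approx, detail = pywt.dwt(weights.ravel(), "haar")
    for i, bit in enumerate(bits):
        q = np.round(detail[i] / strength)        # quantize the coefficient
        if int(q) % 2 != bit:                     # force its parity to encode the bit
            q += 1
        detail[i] = q * strength
    return pywt.idwt(approx, detail, "haar")[: weights.size].reshape(weights.shape)

def extract_bits(weights, n_bits, strength=1e-4):
    """Recover the hidden bits from the detail coefficients' parity."""
    _, detail = pywt.dwt(weights.ravel(), "haar")
    return [int(np.round(detail[i] / strength)) % 2 for i in range(n_bits)]

if __name__ == "__main__":
    w = np.random.randn(4, 8)                     # stand-in for one layer's weights
    secret = [1, 0, 1, 1, 0, 0, 1, 0]             # e.g. a few bits of a hash value
    w_marked = embed_bits(w, secret)
    print("recovered bits :", extract_bits(w_marked, len(secret)))
    print("max weight change:", np.abs(w_marked - w).max())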
Digital watermarking has been widely used to protect the copyright and integrity of multimedia data. Previous studies mainly focus on designing watermarking techniques that are robust to attacks that destroy the embedded watermarks. However, the emerging deep-learning-based image generation technology raises a new open issue: whether it is possible to generate fake watermarked images for circumvention. In this paper, we make the first attempt to develop digital image watermark fakers using generative adversarial learning. Assuming that a set of paired original and watermarked images produced by the targeted watermarker is available, we use them to train a watermark faker with U-Net as the backbone: its input is an original image and, after a domain-specific preprocessing, it outputs a fake watermarked image. Our experiments show that the proposed watermark faker can effectively crack digital image watermarkers in both the spatial and frequency domains, suggesting the risk of such forgery attacks.
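
A schematic sketch of the training setup described in this abstract is given below, with a tiny encoder-decoder standing in for the U-Net backbone and random tensors standing in for the paired (original, watermarked) images; the paper's adversarial training, preprocessing, and loss design are replaced here by a plain L1 reconstruction loss for brevity.

```python
# Schematic sketch, not the paper's model: a small encoder-decoder trained to map
# original images to watermarked ones, imitating the target watermarker.
import torch
import torch.nn as nn

class TinyFaker(nn.Module):
    """Toy encoder-decoder standing in for the U-Net backbone."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decode(self.encode(x))

if __name__ == "__main__":
    faker = TinyFaker()
    opt = torch.optim.Adam(faker.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()
    originals = torch.rand(8, 3, 64, 64)      # stand-ins for original images
    watermarked = torch.rand(8, 3, 64, 64)    # stand-ins for the watermarker's outputs
    for step in range(10):
        fake = faker(originals)
        loss = loss_fn(fake, watermarked)     # push fakes toward the real watermarked images
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("final L1 loss:", float(loss))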
