Digital watermarking has been widely used to protect the copyright and integrity of multimedia data. Previous studies mainly focus on designing watermarking techniques that are robust to attacks aimed at destroying the embedded watermarks. However, emerging deep learning based image generation technology raises a new open issue: whether it is possible to generate fake watermarked images for circumvention. In this paper, we make the first attempt to develop digital image watermark fakers using generative adversarial learning. Assuming that a set of paired original and watermarked images produced by the targeted watermarker is available, we use it to train a watermark faker with U-Net as the backbone; the faker takes an original image as input and, after a domain-specific preprocessing step, outputs a fake watermarked image. Our experiments show that the proposed watermark faker can effectively crack digital image watermarkers in both the spatial and frequency domains, suggesting the risk of such forgery attacks.
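A minimal sketch of the supervised part of such a faker, assuming PyTorch: a toy U-Net-style generator is trained on paired (original, watermarked) images with a pixel-level L1 loss. The paper's actual architecture, adversarial discriminator, and domain-specific preprocessing are omitted, and all names here are illustrative.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net-style generator: one down/up level with a skip connection."""
    def __init__(self, ch=3, base=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(ch, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(base, base * 2, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, ch, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)
        d = self.up(torch.relu(self.down(e)))
        return torch.sigmoid(self.dec(torch.cat([e, d], dim=1)))  # skip connection

def train_step(faker, optimizer, originals, watermarked):
    """One supervised step on a paired batch (adversarial loss omitted for brevity)."""
    optimizer.zero_grad()
    fake = faker(originals)                          # fake watermarked image
    loss = nn.functional.l1_loss(fake, watermarked)  # pixel reconstruction loss
    loss.backward()
    optimizer.step()
    return loss.item()

faker = TinyUNet()
opt = torch.optim.Adam(faker.parameters(), lr=2e-4)
x = torch.rand(4, 3, 64, 64)   # stand-in batch of original images
y = torch.rand(4, 3, 64, 64)   # corresponding watermarked images
print(train_step(faker, opt, x, y))
```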
Digital watermarking is the act of hiding information in multimedia data for the purposes of content protection or authentication. In ordinary digital watermarking, the secret information is embedded into the multimedia data (cover data) with minimal distortion of the cover data, so the embedded watermark is almost imperceptible. In this paper we discuss various digital watermarking techniques in the spatial and frequency domains.
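To make the spatial-domain case concrete, here is a minimal, hypothetical sketch of LSB (least significant bit) embedding in NumPy; frequency-domain schemes instead modify transform coefficients (e.g., DCT), which is not shown.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Spatial-domain embedding: write a binary mark into the LSB plane."""
    assert cover.shape == mark.shape and cover.dtype == np.uint8
    return (cover & 0xFE) | (mark & 1)   # clear the LSB, then set it to the mark bit

def extract_lsb(stego: np.ndarray) -> np.ndarray:
    """Recover the binary mark from the LSB plane."""
    return stego & 1

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # toy grayscale cover
mark = np.random.randint(0, 2, (8, 8), dtype=np.uint8)     # one mark bit per pixel
stego = embed_lsb(cover, mark)
assert np.array_equal(extract_lsb(stego), mark)
# Distortion is at most 1 gray level per pixel, hence near-imperceptible.
assert np.max(np.abs(stego.astype(int) - cover.astype(int))) <= 1
```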
Recently, a self-embedding fragile watermarking scheme based on reference-bit interleaving and adaptive selection of the embedding mode was proposed. Reference bits are derived from the scrambled MSB bits of a cover image and then combined with authentication bits to form the watermark bits for LSB embedding. We find that this algorithm embeds the watermark in a block-independent way, which makes it vulnerable to a collage attack. In addition, because the generation of authentication bits via hash function operations does not involve secret keys, we analyze this algorithm by a multiple stego-image attack. We find that the cost of obtaining all the permutation relations of the $l\cdot b^2$ watermark bits of each block (i.e., the equivalent permutation keys) is about $(l\cdot b^2)!$ for the embedding mode $(m, l)$, where $m$ MSB layers of the cover image are used for generating reference bits, $l$ LSB layers are used for embedding the watermark, and $b\times b$ is the size of an image block. The simulation results and the statistical results demonstrate that our analysis is effective.
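The size of this equivalent key space is easy to illustrate numerically. The snippet below evaluates $(l\cdot b^2)!$ for illustrative parameters (not those claimed by the analyzed paper):

```python
import math

def equivalent_key_space(l: int, b: int) -> int:
    """Number of permutations of the l*b^2 watermark bits of one block: (l*b^2)!."""
    return math.factorial(l * b * b)

# Illustrative parameters: l = 2 LSB embedding layers, 8x8 blocks,
# giving 2 * 8^2 = 128 watermark bits per block.
space = equivalent_key_space(2, 8)
print(f"(2*8^2)! ~ 10^{math.log10(space):.0f}")   # about 10^216
```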
Ongoing standardization in Industry 4.0 supports tool-vendor-neutral representations of piping and instrumentation diagrams as well as 3D pipe routing. However, a complete digital plant model requires combining these two representations. 3D pipe routing information is essential for building any accurate first-principles process simulation model, while piping and instrumentation diagrams are the primary source for control loops. In order to automatically integrate these information sources into a unified digital plant model, it is necessary to develop algorithms for identifying corresponding elements, such as tanks and pumps, in piping and instrumentation diagrams and 3D CAD models. One approach is to raise the two information sources to a common level of abstraction and to match them at that level; graph matching is a potential technique for this purpose. This article focuses on the automatic generation of the graphs as a prerequisite to graph matching. Algorithms for this purpose are proposed and validated with a case study. The paper concludes with a discussion of further research needed to reprocess the generated graphs in order to enable effective matching.
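As an illustration of the target representation (not the article's actual algorithms), the following sketch builds equipment-connectivity graphs with networkx and matches them on topology plus equipment type; all identifiers are hypothetical.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def plant_graph(elements, connections):
    """Abstract a plant model as a graph: nodes are equipment, edges are pipe runs."""
    g = nx.Graph()
    for name, kind in elements:
        g.add_node(name, kind=kind)   # e.g. kind in {"tank", "pump", "valve"}
    g.add_edges_from(connections)
    return g

# Hypothetical mini-plant as extracted from a P&ID ...
pid = plant_graph([("T1", "tank"), ("P1", "pump"), ("T2", "tank")],
                  [("T1", "P1"), ("P1", "T2")])
# ... and the same plant as abstracted from a 3D CAD model (different identifiers).
cad = plant_graph([("tank_a", "tank"), ("pump_a", "pump"), ("tank_b", "tank")],
                  [("tank_a", "pump_a"), ("pump_a", "tank_b")])

# Match on topology plus equipment type; gm.mapping gives one valid correspondence.
gm = isomorphism.GraphMatcher(pid, cad,
                              node_match=lambda a, b: a["kind"] == b["kind"])
print(gm.is_isomorphic(), gm.mapping)
```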
Image forensics plays a crucial role in both criminal investigations (e.g., the dissemination of fake images to spread racial hate or false narratives about specific ethnic groups) and civil litigation (e.g., defamation). Increasingly, machine learning approaches are also utilized in image forensics. However, machine learning-based approaches have a number of limitations and vulnerabilities, for example in detecting adversarial (image) examples, with real-world consequences (e.g., inadmissible evidence or wrongful conviction). Therefore, with a focus on image forensics, this paper surveys techniques that can be used to enhance the robustness of machine learning-based binary manipulation detectors in various adversarial scenarios.
Recent research has demonstrated that adding imperceptible perturbations to original images can fool deep learning models. However, current adversarial perturbations usually take the form of noise and thus have no practical meaning. Image watermarking is a technique widely used for copyright protection. We can regard an image watermark as a kind of meaningful noise: adding it to the original image neither affects people's understanding of the image content nor arouses their suspicion. Therefore, it is interesting to generate adversarial examples using watermarks. In this paper, we propose a novel watermark perturbation for adversarial examples (Adv-watermark) which combines image watermarking techniques and adversarial example algorithms; adding a meaningful watermark to a clean image can attack DNN models. Specifically, we propose a novel optimization algorithm, called Basin Hopping Evolution (BHE), to generate adversarial watermarks in the black-box attack mode. Thanks to BHE, Adv-watermark requires only a few queries to the threat models to complete the attacks. A series of experiments conducted on the ImageNet and CASIA-WebFace datasets shows that the proposed method can efficiently generate adversarial examples and outperforms state-of-the-art attack methods. Moreover, Adv-watermark is more robust against image transformation defense methods.
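BHE itself is the authors' custom algorithm; as a loose stand-in, the sketch below uses SciPy's standard basin hopping to search over a watermark's position and opacity so as to minimize a black-box model's confidence in the true class. The query_model here is a hypothetical placeholder, not a real DNN.

```python
import numpy as np
from scipy.optimize import basinhopping

def paste_watermark(image, mark, x, y, alpha):
    """Alpha-blend a small watermark patch onto the image at position (x, y)."""
    out = image.copy()
    h, w = mark.shape[:2]
    out[y:y + h, x:x + w] = (1 - alpha) * out[y:y + h, x:x + w] + alpha * mark
    return out

def objective(params, image, mark, query_model, true_label):
    """Black-box loss: the model's confidence in the true label (to be minimized)."""
    h, w = mark.shape[:2]
    x = int(np.clip(params[0], 0, image.shape[1] - w))   # keep the patch in bounds
    y = int(np.clip(params[1], 0, image.shape[0] - h))
    alpha = float(np.clip(params[2], 0.1, 1.0))          # keep the mark visible
    return query_model(paste_watermark(image, mark, x, y, alpha))[true_label]

def query_model(img):
    """Hypothetical black-box classifier returning 10 class probabilities."""
    rng = np.random.default_rng(int(img.sum() * 1e6) % 2**32)  # stand-in for a DNN
    p = rng.random(10)
    return p / p.sum()

image = np.random.rand(64, 64, 3)   # toy input image
mark = np.random.rand(8, 8, 3)      # toy watermark patch
res = basinhopping(objective, x0=[10.0, 10.0, 0.4], niter=25, stepsize=5.0,
                   minimizer_kwargs={"method": "Powell",
                                     "args": (image, mark, query_model, 3)})
print(res.x, res.fun)   # best (x, y, alpha) and the remaining true-class confidence
```

Each objective evaluation costs one model query, so the query budget is controlled by niter and the local minimizer; the paper's BHE adds an evolutionary population on top of the hopping step to reduce that budget further.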