Natural Steganography: cover-source switching for better steganography

Publication date: 2016
Language: English
Authors: Patrick Bas





This paper proposes a new steganographic scheme relying on the principle of cover-source switching: the key idea is that the embedding should switch from one cover source to another. The proposed implementation, called Natural Steganography, considers the sensor noise naturally present in raw images and relies on the principle that, by adding a specific noise, the steganographic embedding mimics a change of ISO sensitivity. The embedding methodology consists of 1) perturbing the image in the raw domain, 2) modeling the perturbation in the processed domain, and 3) embedding the payload in the processed domain. We show that this methodology is easily tractable whenever the processing pipeline is known and makes it possible to embed large yet undetectable payloads. We also show that previously used heuristics, such as synchronization of embedding changes or detectability after rescaling, can be explained by operations such as color demosaicing and down-scaling kernels, respectively.
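
To make the idea concrete, here is a minimal numerical sketch of step 1, assuming a simple heteroscedastic sensor-noise model whose variance grows linearly with the raw pixel value; the parameter values, ISO levels, and function names below are illustrative assumptions, not taken from the paper or from a real sensor.

```python
import numpy as np

# Assumed noise model: variance at ISO level i is a_i * mu + b_i.
A1, B1 = 0.10, 2.0   # assumed parameters at the source ISO (e.g. ISO 100)
A2, B2 = 0.20, 5.0   # assumed parameters at the target ISO (e.g. ISO 200)

def mimic_iso_switch(raw, seed=0):
    """Add just enough noise so a raw image captured at the source ISO
    statistically resembles one captured at the target ISO (step 1:
    perturbing the image in the raw domain)."""
    rng = np.random.default_rng(seed)
    mu = raw.astype(np.float64)
    extra_var = np.clip((A2 - A1) * mu + (B2 - B1), 0.0, None)
    return mu + rng.normal(0.0, np.sqrt(extra_var))

raw = np.random.default_rng(1).integers(0, 4096, size=(64, 64)).astype(np.float64)
stego_raw = mimic_iso_switch(raw)
```

The payload is subsequently coded into this added noise after it has been modeled in the processed (demosaiced, developed) domain, i.e. steps 2 and 3, which this sketch does not cover.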


Related research

Zhuo Zhang, Jia Liu, Yan Ke (2018)
In this paper, a novel data-driven information hiding scheme called generative steganography by sampling (GSS) is proposed. Unlike in traditional modification-based steganography, in our method the stego image is directly sampled by a powerful generator: no explicit cover is used. Both parties share a secret key used for message embedding and extraction. The Jensen-Shannon divergence is introduced as a new criterion for evaluating the security of generative steganography. Based on these principles, we propose a simple, practical generative steganography method that uses semantic image inpainting. The message is written in advance into an uncorrupted region of the corrupted image, a region that must be retained during completion. The corrupted image carrying the secret message is then fed into a generator, trained as part of a generative adversarial network (GAN), for semantic completion. Message-loss and prior-loss terms are proposed to penalize message extraction errors and unrealistic stego images, respectively. In our design, we first train a generator whose target is the generation of new samples from the same distribution as the existing training data. Next, for the trained generator, backpropagation through the message and prior losses is used to optimize the input noise fed to the generator. The presented experiments demonstrate the potential of the proposed framework based on both qualitative and quantitative evaluations of the generated stego images.
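
As a rough illustration of the optimization described above, the following PyTorch sketch back-propagates a combined message loss and prior loss to the generator's input noise; the generator G, discriminator D, mask, and weighting lam are hypothetical placeholders, not the paper's actual models.

```python
import torch

def optimize_latent(G, D, corrupted, mask, z0, steps=200, lam=0.1, lr=0.05):
    """Back-propagate the message and prior losses to the generator input z.

    G and D are a pretrained GAN generator/discriminator, `corrupted` is the
    image carrying the message in its retained (uncorrupted) region, and
    `mask` is 1 on that region; all names here are illustrative assumptions."""
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = G(z)
        # message/context loss: the completion must preserve the retained
        # region, so the embedded message survives extraction
        msg_loss = ((x - corrupted) * mask).abs().mean()
        # prior loss: push the completion toward realistic images, penalizing
        # unrealistic stego content (non-saturating GAN loss on logits)
        prior_loss = torch.nn.functional.softplus(-D(x)).mean()
        loss = msg_loss + lam * prior_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```
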
We propose an image steganographic algorithm called EncryptGAN, which disguises private image communication in an open communication channel. The insight is that a content transform between two very different domains (e.g., face to flower) allows one to hide image messages in one domain (face) and communicate using their counterparts in another domain (flower). The key ingredient in our method, unlike related approaches, is a specially trained network that extracts transformed images from both domains and uses them as the public and private keys. We ensure the image communication remains secret, except to the intended recipient, even when the content transformation networks are exposed. To communicate, one directly pastes the "message" image onto a larger public key image (face). Depending on the location and content of the message image, the "disguise" image (flower) alters its appearance and shape while maintaining its overall object identity (flower). The recipient decodes the altered image to uncover the original message image using the message image key. We implement the entire procedure as a constrained CycleGAN, where the public and private key generating network is used as an additional constraint on the cycle consistency. Comprehensive experimental results show that EncryptGAN outperforms the state of the art in terms of both encryption and security measures.
Yan Ke, Minqing Zhang, Jia Liu (2017)
The distortion in steganography usually comes from the modification or recoding of the cover image during the embedding process, and it leaves the steganalyzer with a possibility of discriminating stego images from covers. Faced with this risk, we propose generative steganography with Kerckhoffs' principle (GSK) in this letter. In GSK, the secret messages are generated from a cover image by a generator rather than embedded into the cover, so the cover is never modified. To ensure security, the generators are trained to satisfy Kerckhoffs' principle using generative adversarial networks (GANs). Everything about the GSK system, except the extraction key, is public knowledge for the receivers. The secret messages can be output by the generator if and only if both the extraction key and the cover image are provided as inputs. In the generator training procedure, two GANs, Message-GAN and Cover-GAN, are designed to work jointly so that the generated results are under the control of the extraction key and the cover image. We provide experimental results on the training process and give an example of the working process using a generator trained on MNIST, which demonstrates that GSK can generate messages from a cover image without any modification of that cover, and that, without the extraction key or the cover image, only meaningless results are obtained.
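
The extraction interface this implies can be sketched as follows; the tiny conditional generator, its layer sizes, and the MNIST-shaped cover are stand-ins only, and the joint Message-GAN/Cover-GAN training that makes the output key-dependent is not reproduced here.

```python
import torch
import torch.nn as nn

# Stand-in conditional generator: maps an extraction key and a cover image to
# a message tensor.  Architecture and dimensions are illustrative assumptions.
class MessageGenerator(nn.Module):
    def __init__(self, key_dim=64, msg_dim=100):
        super().__init__()
        self.cover_enc = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128 + key_dim, 256), nn.ReLU(),
                                  nn.Linear(256, msg_dim), nn.Tanh())

    def forward(self, key, cover):
        # The cover image is only read, never modified.
        return self.head(torch.cat([self.cover_enc(cover), key], dim=1))

G = MessageGenerator()
key = torch.randn(1, 64)          # extraction key (kept secret)
cover = torch.rand(1, 1, 28, 28)  # e.g. an MNIST-sized cover image
message = G(key, cover)           # both inputs are needed; a wrong key or
                                  # cover yields a meaningless output
```
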
Whereas traditional cryptography encrypts a secret message into an unintelligible form, steganography conceals that communication is taking place by encoding a secret message into a cover signal. Language is a particularly pragmatic cover signal due to its benign occurrence and independence from any one medium. Traditionally, linguistic steganography systems encode secret messages in existing text via synonym substitution or word order rearrangements. Advances in neural language models enable previously impractical generation-based techniques. We propose a steganography technique based on arithmetic coding with large-scale neural language models. We find that our approach can generate realistic looking cover sentences as evaluated by humans, while at the same time preserving security by matching the cover message distribution with the language model distribution.
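
A toy sketch of the underlying encoding idea follows: message bits select tokens by narrowing an interval partitioned according to the model's probabilities, and replaying the same intervals recovers the bits. A fixed unigram table stands in for the neural language model, and exact rational arithmetic replaces the streaming fixed-precision coder a real system would use.

```python
from fractions import Fraction

# Toy stand-in for the language model: a fixed unigram distribution.  A real
# system conditions these probabilities on the generated prefix.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]
PROBS = [Fraction(p, 16) for p in (4, 3, 2, 2, 2, 2, 1)]
CUM = [sum(PROBS[:i], Fraction(0)) for i in range(len(PROBS) + 1)]

def encode(bits):
    """Turn a bitstring into tokens drawn according to the model distribution."""
    n, k = len(bits), int(bits, 2)
    target_lo, target_hi = Fraction(k, 2 ** n), Fraction(k + 1, 2 ** n)
    point = (target_lo + target_hi) / 2
    low, high, tokens = Fraction(0), Fraction(1), []
    # Emit tokens until the model interval lies inside the message interval.
    while not (target_lo <= low and high <= target_hi):
        width = high - low
        for i, tok in enumerate(VOCAB):
            lo, hi = low + CUM[i] * width, low + CUM[i + 1] * width
            if lo <= point < hi:
                tokens.append(tok)
                low, high = lo, hi
                break
    return tokens

def decode(tokens, n):
    """Recover the n-bit message by replaying the same interval narrowing."""
    low, high = Fraction(0), Fraction(1)
    for tok in tokens:
        i, width = VOCAB.index(tok), high - low
        low, high = low + CUM[i] * width, low + CUM[i + 1] * width
    return format(int(low * 2 ** n), f"0{n}b")

msg = "1011001"
assert decode(encode(msg), len(msg)) == msg
```
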
Steganography comprises the mechanics of hiding data in a host medium that may be publicly available. While previous works focused on unimodal setups (e.g., hiding images in images, or hiding audio in audio), PixInWav targets the multimodal case of hiding images in audio. To this end, we propose a novel residual architecture operating on top of short-time discrete cosine transform (STDCT) audio spectrograms. Among our results, we find that the proposed residual audio steganography setup allows the hidden image to be encoded independently from the host audio without compromising quality. Accordingly, while previous works require both host and hidden signals in order to hide a signal, PixInWav can encode images offline, which can later be hidden, in a residual fashion, into any audio signal. Finally, we test our scheme in a lab setting by transmitting images over the air from a loudspeaker to a microphone, verifying our theoretical insights and obtaining promising results.
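
The residual idea can be sketched roughly as follows: compute an STDCT spectrogram of the host, add a precomputed image-derived residual, and invert back to a waveform. The frame size, hop, scaling factor, and the random array standing in for the learned image residual are assumptions for illustration; PixInWav's actual encoder and decoder networks are not shown.

```python
import numpy as np
from scipy.fft import dct, idct

def stdct(x, frame=1024, hop=512):
    """Short-time DCT: slice the signal into frames and DCT-II each frame."""
    idx = range(0, len(x) - frame + 1, hop)
    frames = np.stack([x[i:i + frame] for i in idx])
    return dct(frames, type=2, norm="ortho", axis=1)

def istdct(X, frame=1024, hop=512):
    """Invert by IDCT and overlap-add, averaging overlapping samples."""
    frames = idct(X, type=2, norm="ortho", axis=1)
    out = np.zeros(hop * (len(frames) - 1) + frame)
    count = np.zeros_like(out)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + frame] += f
        count[i * hop:i * hop + frame] += 1.0
    return out / np.maximum(count, 1.0)

def hide(host_audio, residual, alpha=0.01):
    """Add a precomputed image residual on top of any host spectrogram."""
    X = stdct(host_audio)
    R = np.zeros_like(X)
    r = residual[:X.shape[0], :X.shape[1]]
    R[:r.shape[0], :r.shape[1]] = r
    return istdct(X + alpha * R)

host = np.random.default_rng(0).standard_normal(4 * 44100)        # 4 s of noise
residual = np.random.default_rng(1).standard_normal((64, 1024))   # stand-in residual
stego_audio = hide(host, residual)
```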
