
120 - Biao Yi, Hanzhou Wu, Guorui Feng 2021
Recent advances in linguistic steganalysis have successively applied CNNs, RNNs, GNNs and other deep learning models for detecting secret information in generative texts. These methods tend to seek stronger feature extractors to achieve better steganalysis performance. However, we have found through experiments that there actually exists a significant difference between automatically generated steganographic texts and carrier texts in terms of the conditional probability distribution of individual words. This kind of statistical difference can be naturally captured by the language model used for generating steganographic texts, which drives us to give the classifier a priori knowledge of the language model to enhance the steganalysis ability. To this end, we present two methods for efficient linguistic steganalysis in this paper. One is to pre-train a language model based on RNN, and the other is to pre-train a sequence autoencoder. Experimental results show that the two methods achieve different degrees of performance improvement compared to a randomly initialized RNN classifier, and the convergence speed is significantly accelerated. Moreover, our methods achieve the best detection results.
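As a rough illustration of the first strategy, the PyTorch sketch below pre-trains an LSTM language model and then reuses its embedding and recurrent weights to initialize a binary cover-vs-stego classifier; the module names, layer sizes, and the choice of LSTM are assumptions made for illustration, not details taken from the paper.

    import torch
    import torch.nn as nn

    class RNNLanguageModel(nn.Module):
        # Pre-trained to predict the next word, so it captures the
        # conditional probability distribution of individual words.
        def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
            self.head = nn.Linear(hid_dim, vocab_size)

        def forward(self, x):
            h, _ = self.rnn(self.embed(x))
            return self.head(h)            # next-word logits

    class SteganalysisClassifier(nn.Module):
        # Binary classifier (cover vs. stego) initialized from the pre-trained LM.
        def __init__(self, lm, num_classes=2):
            super().__init__()
            self.embed, self.rnn = lm.embed, lm.rnn   # reuse pre-trained weights
            self.cls = nn.Linear(lm.rnn.hidden_size, num_classes)

        def forward(self, x):
            h, _ = self.rnn(self.embed(x))
            return self.cls(h[:, -1])      # classify from the last hidden state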
In order to protect the intellectual property (IP) of deep neural networks (DNNs), many existing DNN watermarking techniques either embed watermarks directly into the DNN parameters or insert backdoor watermarks by fine-tuning the DNN parameters, which, however, cannot resist attacks that remove watermarks by altering the DNN parameters. In this paper, we bypass such attacks by introducing a structural watermarking scheme that utilizes channel pruning to embed the watermark into the host DNN architecture instead of crafting the DNN parameters. To be specific, during watermark embedding, we prune the internal channels of the host DNN with the channel pruning rates controlled by the watermark. During watermark extraction, the watermark is retrieved by identifying the channel pruning rates from the architecture of the target DNN model. Due to the superiority of the pruning mechanism, the performance of the DNN model on its original task is preserved during watermark embedding. Experimental results have shown that the proposed work enables the embedded watermark to be reliably recovered and provides a high watermark capacity, without sacrificing the usability of the DNN model. It is also demonstrated that the work is robust against common transforms and attacks designed for conventional watermarking approaches.
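The following minimal sketch shows one way watermark bits could be mapped to per-layer channel pruning rates and then read back from the pruned architecture; the two candidate rates, the layer count, and the nearest-rate decoding are illustrative assumptions rather than the paper's exact construction.

    # Map each watermark bit to one of two pruning rates for a layer,
    # then recover the bits by measuring the pruning rates of the model.
    RATE_FOR_BIT = {0: 0.10, 1: 0.30}   # illustrative rates, not from the paper

    def rates_from_watermark(bits):
        return [RATE_FOR_BIT[b] for b in bits]

    def watermark_from_counts(kept_channels, original_channels):
        bits = []
        for kept, total in zip(kept_channels, original_channels):
            rate = 1.0 - kept / total             # observed pruning rate
            bits.append(min(RATE_FOR_BIT, key=lambda b: abs(RATE_FOR_BIT[b] - rate)))
        return bits

    # Example: embed 4 bits into 4 prunable layers with 64 channels each.
    bits = [1, 0, 1, 1]
    rates = rates_from_watermark(bits)
    kept = [round(64 * (1 - r)) for r in rates]   # channels left after pruning
    assert watermark_from_counts(kept, [64] * 4) == bits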
Data hiding refers to the art of hiding secret data in a digital cover for covert communication. In this letter, we propose a novel method to disguise data hiding tools, including a data embedding tool and a data extraction tool, as a deep neural network (DNN) with an ordinary task. After training a DNN for both style transfer and data hiding, the DNN can transfer the style of an image to a target one, but it can also be used to hide secret data in a cover image or extract secret data from a stego image by inputting the trigger signal. In other words, the data hiding tools are themselves hidden to avoid arousing suspicion.
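A very rough sketch of a single network whose behaviour is switched by a trigger input is given below; the architecture, the way the trigger is injected as an extra input plane, and all names are assumptions made for illustration only, not the design used in the letter.

    import torch
    import torch.nn as nn

    class DualPurposeNet(nn.Module):
        # Sketch of a network that performs style transfer by default but
        # can be steered towards data embedding/extraction by a trigger.
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1))

        def forward(self, image, trigger=None):
            # The trigger is supplied as an extra input plane; an all-zero
            # plane corresponds to the ordinary style-transfer behaviour.
            if trigger is None:
                trigger = torch.zeros_like(image[:, :1])
            return self.backbone(torch.cat([image, trigger], dim=1))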
Deep Convolutional Neural Networks (DCNNs) are capable of obtaining powerful image representations, which have attracted great attention in image recognition. However, their internal mechanisms limit them in modeling orientation transformations. In this paper, we develop Orientation Convolution Networks (OCNs) for image recognition based on the proposed Landmark Gabor Filters (LGFs), so that the robustness of the learned representation against changes of orientation can be enhanced. By modulating the convolutional filters with LGFs, OCNs can be made compatible with any existing deep learning network. LGFs act as a Gabor filter bank obtained by selecting $p$ ($\ll n$) representative Gabor filters as landmarks and expressing the original Gabor filters as sparse linear combinations of these landmarks. Specifically, based on a matrix factorization framework, a flexible integration of the local and the global structure of the original Gabor filters is achieved via sparsity and low-rank constraints. With the propagation of the low-rank structure, the corresponding sparsity of the representation of the original Gabor filter bank can be significantly promoted. Experimental results over several benchmarks demonstrate that our method is less sensitive to orientation and achieves higher performance in both accuracy and cost, compared with the existing state-of-the-art methods. Besides, our OCNs have few parameters to learn and can significantly reduce the complexity of training the network.
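The landmark idea can be illustrated numerically: each of the $n$ Gabor filters is approximated as a sparse linear combination of $p$ ($\ll n$) landmark filters. The sketch below uses a simplified Gabor kernel, uniform landmark selection, and orthogonal matching pursuit in place of the paper's sparse and low-rank matrix factorization, so it is only a stand-in for the actual method.

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def gabor(theta, size=11, sigma=3.0, lam=4.0):
        # Real part of a simplified Gabor kernel with orientation theta.
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
        return g.ravel()

    n, p = 32, 8                                         # n filters, p landmarks
    thetas = np.linspace(0, np.pi, n, endpoint=False)
    bank = np.stack([gabor(t) for t in thetas], axis=1)  # (d, n) full filter bank
    landmarks = bank[:, :: n // p]                       # (d, p) selected landmarks
    landmarks = landmarks / np.linalg.norm(landmarks, axis=0)

    # Sparse codes: every original filter as a combination of a few landmarks.
    codes = orthogonal_mp(landmarks, bank, n_nonzero_coefs=3)   # (p, n)
    recon = landmarks @ codes
    err = np.linalg.norm(recon - bank) / np.linalg.norm(bank)
    print(f"relative reconstruction error: {err:.3f}")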
Many learning tasks require us to deal with graph data, which contains rich relational information among elements, leading to an increasing number of graph neural network (GNN) models being deployed in industrial products to improve the quality of service. However, they also raise challenges to model authentication. It is necessary to protect the ownership of GNN models, which motivates us to present a watermarking method for GNN models in this paper. In the proposed method, an Erdos-Renyi (ER) random graph with random node feature vectors and labels is randomly generated as a trigger to train the GNN to be protected together with the normal samples. During model training, the secret watermark is embedded into the label predictions of the ER graph nodes. During model verification, by activating a marked GNN with the trigger ER graph, the watermark can be reconstructed from the output to verify ownership. Since the ER graph is randomly generated, feeding it to a non-marked GNN yields random label predictions for the graph nodes, resulting in a low false alarm rate for the proposed work. Experimental results have also shown that the performance of a marked GNN on its original task is not impaired. Moreover, the method is robust against model compression and fine-tuning, which demonstrates its superiority and applicability.
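A minimal sketch of how such a trigger could be generated is shown below; the graph size, edge probability, feature dimension, and the use of the random seed as a secret key are illustrative assumptions. Verification would then compare the marked model's label predictions on these nodes with the embedded watermark bits.

    import networkx as nx
    import numpy as np

    # Illustrative trigger construction: an Erdos-Renyi graph with random
    # node features and random labels; the labels carry the watermark
    # during training of the GNN to be protected.
    rng = np.random.default_rng(seed=2022)            # seed acts as a secret key
    num_nodes, edge_prob, feat_dim, num_classes = 30, 0.2, 16, 2

    trigger = nx.erdos_renyi_graph(num_nodes, edge_prob, seed=2022)
    features = rng.standard_normal((num_nodes, feat_dim))
    labels = rng.integers(0, num_classes, size=num_nodes)   # watermark-bearing labels

    print(trigger.number_of_nodes(), trigger.number_of_edges())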
124 - Hanzhou Wu, Xinpeng Zhang 2019
While many games have been designed for steganography and robust watermarking, few have focused on reversible watermarking. We present a two-encoder game related to the rate-distortion optimization of content-adaptive reversible watermarking. In the game, Alice first hides a payload into a cover. Then, Bob hides another payload into the modified cover. The embedding strategy of Alice affects the embedding capacity of Bob. The embedding strategy of Bob may cause data-extraction errors for Alice. Both want to embed as many pure secret bits as possible, subject to an upper-bounded distortion. We investigate the non-cooperative game and the cooperative game between Alice and Bob. When they cooperate with each other, one may consider them as a whole, i.e., an encoder uses a cover for data embedding two times. When they do not cooperate with each other, the game corresponds to a separable system, i.e., both want to independently hide a payload within the cover, but recovering the cover may need cooperation. We find equilibrium strategies for both players under constraints.
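In a hedged formulation whose notation is not taken from the paper, the cooperative case can be read as a single constrained optimization, $\max_{s_A, s_B} R_A(s_A) + R_B(s_B \mid s_A)$ subject to $D(x, s_A, s_B) \le \Delta$, where $s_A$ and $s_B$ are the two embedding strategies, $R$ denotes pure payload, and $D$ the accumulated distortion of the cover $x$. In the non-cooperative case Alice first solves $\max_{s_A} R_A(s_A)$ subject to $D(x, s_A) \le \Delta_A$, and Bob then solves $\max_{s_B} R_B(s_B \mid s_A)$ subject to $D(x, s_A, s_B) \le \Delta_B$, so his optimum depends on Alice's choice and an equilibrium is a pair of strategies from which neither player benefits by deviating.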
45 - Hanzhou Wu 2019
Conventional steganalysis detects the presence of steganography within single objects. In the real world, we may face a complex scenario in which one or some of multiple users, called actors, are guilty of using steganography, which is typically defined as the Steganographer Identification Problem (SIP). One might use conventional steganalysis algorithms to separate stego objects from cover objects and then identify the guilty actors. However, the guilty actors may be missed due to a number of false alarms. To deal with the SIP, most of the state-of-the-art works use unsupervised learning based approaches. In their solutions, each actor holds multiple digital objects, from which a set of feature vectors can be extracted. Well-defined distances between these feature sets are determined to measure the similarity between the corresponding actors. By applying clustering or outlier detection, the most suspicious actor(s) will be judged as the steganographer(s). Though the SIP needs further study, the existing works have a good ability to identify the steganographer(s) when non-adaptive steganographic embedding is applied. In this chapter, we present foundational concepts and review advanced methodologies in SIP. This chapter is self-contained and intended as a tutorial introducing the SIP in the context of media steganography.
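To make the pipeline concrete, the sketch below scores actors by the mean-embedding distance between their feature sets and flags the actor farthest from all the others; this is only a simplified stand-in for the clustering and outlier-detection methods reviewed in the chapter, and all names are illustrative.

    import numpy as np

    def set_distance(x, y):
        # Distance between the mean feature vectors of two actors' object sets;
        # a simple stand-in for the set distances used in SIP solutions.
        return np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))

    def most_suspicious(actor_features):
        # actor_features: list of (num_objects, feature_dim) arrays, one per actor.
        k = len(actor_features)
        dist = np.zeros((k, k))
        for i in range(k):
            for j in range(i + 1, k):
                dist[i, j] = dist[j, i] = set_distance(actor_features[i], actor_features[j])
        # The actor farthest from everyone else is judged the steganographer.
        return int(np.argmax(dist.sum(axis=1)))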
104 - Hanzhou Wu 2018
Traditional steganalysis algorithms focus on detecting the existence of steganography in a single object. In practice, one may face a complex scenario where one or some of multiple users, also called actors, are guilty of using steganography, which is defined as the steganographer identification problem (SIP). This requires steganalysis experts to design effective and robust detection algorithms to identify the guilty actor(s). The mainstream works use clustering, ensembles and anomaly detection, where distances in high-dimensional space between the features of actors are determined to find the outlier(s) corresponding to the steganographer(s). However, in high-dimensional space, feature points can be sparse such that distances between feature points may become relatively similar to each other, which does not benefit the detection. Moreover, it is well known in machine learning that combining techniques such as boosting and bagging can be effective in improving detection performance. This motivates the authors in this paper to present a feature bagging approach to SIP. The proposed work merges results from multiple detection sub-models, each of whose feature space is randomly sampled from the raw full-dimensional space. We create a new dataset called ImgNetEase including 5108 images downloaded from a social website to mimic the real-world scenario. We extract PEV-274 features from the images and take nsF5 as the steganographic algorithm for evaluation. Experiments have shown that our work improves the detection accuracy significantly on the created dataset in most cases, which demonstrates its superiority and applicability.
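A simplified sketch of the feature bagging idea follows: each sub-model works on a randomly sampled subspace of the full feature space and the per-bag scores are merged. The per-bag detector used here is a plain centroid-distance score, and the bag count and dimensions are illustrative; it is not the paper's actual sub-model.

    import numpy as np

    def feature_bagging_scores(features, num_bags=20, bag_dim=64, seed=0):
        # features: (num_actors, full_dim) matrix, e.g. PEV-274 features with
        # one summarizing row per actor (a simplification for this sketch).
        rng = np.random.default_rng(seed)
        num_actors, full_dim = features.shape
        scores = np.zeros(num_actors)
        for _ in range(num_bags):
            idx = rng.choice(full_dim, size=bag_dim, replace=False)  # random subspace
            sub = features[:, idx]
            centroid = sub.mean(axis=0)
            # Sub-model score: distance to the centroid in the sampled subspace.
            scores += np.linalg.norm(sub - centroid, axis=1)
        return scores / num_bags   # merged score; the largest is most suspicious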
H.264/Advanced Video Coding (AVC) is currently one of the most commonly used video compression standards. In this paper, we propose a Reversible Data Hiding (RDH) method for H.264/AVC videos. In the proposed method, the macroblocks with intra-frame $4\times 4$ prediction modes in intra frames are first selected as embeddable blocks. Then, the last zero Quantized Discrete Cosine Transform (QDCT) coefficients in all $4\times 4$ blocks of the embeddable macroblocks are paired. Next, a modification mapping rule based on making full use of the modification directions is given. Finally, each zero coefficient pair is changed by combining the given mapping rule with the to-be-embedded information bits. Since most of the last QDCT coefficients in the $4\times 4$ blocks are zero and located in the high-frequency area, the proposed method can obtain high embedding capacity and low distortion.
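As a purely hypothetical illustration of pairing zero coefficients and exploiting both modification directions (this is not the paper's actual mapping rule), one could let each pair of last-zero QDCT coefficients carry one base-5 digit by changing at most one coefficient by $\pm 1$:

    # Hypothetical mapping, for illustration only: one base-5 digit per
    # zero coefficient pair, with at most one +/-1 change per pair.
    PAIR_FOR_DIGIT = {0: (0, 0), 1: (1, 0), 2: (-1, 0), 3: (0, 1), 4: (0, -1)}
    DIGIT_FOR_PAIR = {v: k for k, v in PAIR_FOR_DIGIT.items()}

    def embed(digits):
        return [PAIR_FOR_DIGIT[d] for d in digits]

    def extract(pairs):
        return [DIGIT_FOR_PAIR[p] for p in pairs]

    digits = [3, 0, 4, 1]
    assert extract(embed(digits)) == digits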
Data in the mobile cloud environment are mainly transmitted via wireless noisy channels, which may result in transmission errors with a high probability due to unreliable connectivity. For video transmission, unreliable connectivity may cause significant degradation of the content. Improving or maintaining video quality over a lossy channel is therefore a very important research topic. Error concealment with data hiding (ECDH) is an effective way to conceal the errors introduced by channels. It can reduce error propagation between neighboring blocks/frames compared with the methods exploiting temporal/spatial correlations. The existing video ECDH methods often embed the motion vectors (MVs) into specific locations. Nevertheless, specific embedding locations cannot resist random errors. To compensate for the unreliable connectivity in the mobile cloud environment, in this paper we present a video ECDH scheme using 3D reversible data hiding (RDH), in which each MV is repeated multiple times and the repeated MVs are embedded into different macroblocks (MBs) at random. Though the multiple embedding requires more embedding space, a satisfactory trade-off between the introduced distortion and the reconstructed video quality can be achieved by tuning the number of times the MVs are repeated. With random embedding, the probability of losing the MVs decreases rapidly, resulting in better error concealment performance. Experimental results show that the PSNR values gain at least about 5 dB compared with the existing ECDH methods. Meanwhile, the proposed method improves the video quality significantly.
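As a rough back-of-the-envelope illustration (the numbers are not from the paper): if a single embedded copy of an MV is erased with probability $p$ and its $k$ copies are placed in independently chosen macroblocks, the MV is lost only when all copies are erased, which happens with probability roughly $p^k$; for $p = 0.2$ and $k = 3$ the loss probability drops from $0.2$ to $0.008$, at the cost of three times the embedding space.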
