
TS-RNN: Text Steganalysis Based on Recurrent Neural Networks

Published by: Zhongliang Yang
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





With the rapid development of natural language processing technologies, more and more text steganographic methods based on automatic text generation technology have appeared in recent years. These models use the powerful self-learning and feature extraction ability of neural networks to learn the feature expression of massive amounts of normal text. They can then automatically generate steganographic texts that conform to such a statistical distribution based on the learned statistical patterns. In this paper, we observe that the conditional probability distribution of each word in the automatically generated steganographic texts is distorted after hidden information is embedded. We use Recurrent Neural Networks (RNNs) to extract these feature distribution differences and then classify the resulting features into cover text and stego text categories. Experimental results show that the proposed model can achieve high detection accuracy. Moreover, the proposed model can even exploit the subtle differences in the feature distribution of texts to estimate the amount of hidden information embedded in the generated steganographic text.
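As a rough illustration of this approach, the sketch below (in PyTorch) embeds word indices, runs them through a bidirectional LSTM, and classifies the resulting feature vector as cover or stego. The class name TSRNNClassifier, the layer sizes, and the final-time-step pooling are assumptions for illustration, not the authors' released implementation.

    # Sketch of an RNN-based text steganalysis classifier (illustrative only).
    # Word indices are embedded, fed through a bidirectional LSTM, and the
    # final time-step features are mapped to two classes: cover vs. stego.
    import torch
    import torch.nn as nn

    class TSRNNClassifier(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
            self.classifier = nn.Linear(2 * hidden_dim, num_classes)

        def forward(self, token_ids):
            # token_ids: (batch, seq_len) integer word indices
            embedded = self.embedding(token_ids)
            outputs, _ = self.rnn(embedded)
            # Use the last time step of the bidirectional outputs as the
            # sentence-level feature capturing distribution differences.
            features = outputs[:, -1, :]
            return self.classifier(features)

    # Usage: logits over {cover, stego} for 4 sentences of length 20.
    model = TSRNNClassifier(vocab_size=10000)
    logits = model(torch.randint(1, 10000, (4, 20)))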




Read also

Steganalysis has been an important research topic in cybersecurity that helps to identify covert attacks in public networks. With the rapid development of natural language processing technology in the past two years, coverless steganography has developed greatly. Previous text steganalysis methods have shown unsatisfactory results on this new steganography technique, which remains an unsolved challenge. Different from all previous text steganalysis methods, in this paper we propose a text steganalysis method (TS-CNN) based on semantic analysis, which uses a convolutional neural network (CNN) to extract high-level semantic features of texts and finds the subtle distribution differences in the semantic space before and after embedding the secret information. To train and test the proposed model, we collected and released a large text steganalysis (CT-Steg) dataset, which contains a total of 216,000 texts with various lengths and various embedding rates. Experimental results show that the proposed model can achieve nearly 100% precision and recall, outperforming all previous methods. Furthermore, the proposed model can even estimate the capacity of the hidden information inside. These results strongly support that using the subtle changes in the semantic space before and after embedding the secret information to conduct text steganalysis is feasible and effective.
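The abstract does not give the network details, but a TextCNN-style classifier is one plausible reading of "uses a convolutional neural network to extract high-level semantic features": convolutions of several widths over word embeddings, max-pooled over time. The sketch below is an assumption-laden illustration in PyTorch, not the TS-CNN architecture itself.

    # Illustrative CNN-based text steganalysis classifier (not the paper's code).
    # Convolutions of widths 3, 4, 5 over word embeddings are max-pooled over
    # time and concatenated into a sentence-level semantic feature vector.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TSCNNClassifier(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, num_filters=100,
                     kernel_sizes=(3, 4, 5), num_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.convs = nn.ModuleList(
                nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
            self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

        def forward(self, token_ids):
            # token_ids: (batch, seq_len); seq_len must exceed the widest kernel
            x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed, seq)
            pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
            return self.classifier(torch.cat(pooled, dim=1))

    # Usage: classify 4 sentences of length 30 as cover or stego.
    model = TSCNNClassifier(vocab_size=10000)
    logits = model(torch.randint(1, 10000, (4, 30)))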
Steganalysis is the analysis of stego images. Like cryptanalysis, steganalysis is used to detect messages, often encrypted with a secret key, hidden in stego images produced by steganography techniques. Recently, many new and improved steganography techniques have been developed and proposed by researchers, which require robust steganalysis techniques capable of detecting stego images with a minimal false-alarm rate. This paper discusses the different steganalysis techniques and helps to understand how, where, and when these techniques can be used in different situations.
Recently, the application of deep learning to steganalysis has drawn many researchers' attention. Most of the proposed steganalytic deep learning models are derived from neural networks applied in computer vision, and these networks have shown distinguished performance. However, all such back-propagation-based neural networks can be fooled by forged inputs known as adversarial examples. In this paper we propose a method to generate steganographic adversarial examples in order to enhance the steganographic security of existing algorithms. These adversarial examples can increase the detection error of steganalytic CNNs. Experiments demonstrate the effectiveness of the proposed method.
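As one concrete way to realize the idea of steganographic adversarial examples, the sketch below applies a single FGSM-style gradient-sign step to a stego image so that a differentiable steganalysis CNN is more likely to misclassify it. The paper's actual construction may differ; detector, the class indexing, and epsilon are illustrative assumptions.

    # FGSM-style perturbation of a stego image against a steganalytic CNN
    # (generic technique shown for illustration, not the paper's method).
    import torch
    import torch.nn.functional as F

    def adversarial_stego(detector, stego_image, epsilon=2.0 / 255):
        # detector: any differentiable steganalysis model returning class logits
        # stego_image: a (1, C, H, W) tensor with pixel values in [0, 1]
        x = stego_image.clone().requires_grad_(True)
        # Loss of the detector predicting the "stego" class (assumed index 1).
        loss = F.cross_entropy(detector(x), torch.tensor([1], device=x.device))
        loss.backward()
        # Step in the direction that increases the detector's error, then
        # clamp back to a valid pixel range.
        adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)
        return adv.detach()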
Recurrent neural networks (RNNs) are capable of modeling temporal dependencies of complex sequential data. In general, currently available RNN structures tend to concentrate on controlling the contributions of current and previous information. However, the exploration of different importance levels of different elements within an input vector is usually ignored. We propose a simple yet effective Element-wise-Attention Gate (EleAttG), which can be easily added to an RNN block (e.g., all RNN neurons in an RNN layer) to give the RNN neurons attentiveness capability. For an RNN block, an EleAttG adaptively modulates the input by assigning a different level of importance, i.e., attention, to each element/dimension of the input. We refer to an RNN block equipped with an EleAttG as an EleAtt-RNN block. Instead of modulating the input as a whole, the EleAttG modulates the input at fine granularity, i.e., element-wise, and the modulation is content adaptive. The proposed EleAttG, as an additional fundamental unit, is general and can be applied to any RNN structure, e.g., the standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to different tasks, including action recognition from both skeleton-based data and RGB videos, gesture recognition, and sequential MNIST classification. Experiments show that adding attentiveness through EleAttGs to RNN blocks significantly improves the power of RNNs.
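A minimal sketch of the EleAttG idea, wrapping a GRU cell: an attention vector computed from the current input and the previous hidden state rescales each element of the input before the cell processes it. Layer sizes and the exact gating formulation are assumptions based on the description above.

    # Element-wise-Attention Gate around a GRU cell (illustrative sketch).
    import torch
    import torch.nn as nn

    class EleAttGRUCell(nn.Module):
        def __init__(self, input_dim, hidden_dim):
            super().__init__()
            self.att = nn.Linear(input_dim + hidden_dim, input_dim)
            self.cell = nn.GRUCell(input_dim, hidden_dim)

        def forward(self, x_t, h_prev):
            # Per-dimension attention in (0, 1) computed from input and state.
            a_t = torch.sigmoid(self.att(torch.cat([x_t, h_prev], dim=1)))
            # The cell sees the element-wise modulated input, not the raw input.
            return self.cell(a_t * x_t, h_prev)

    # Usage over a short random sequence (length 10, batch 4, 16 features).
    cell = EleAttGRUCell(input_dim=16, hidden_dim=32)
    h = torch.zeros(4, 32)
    for x_t in torch.randn(10, 4, 16):
        h = cell(x_t, h)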
Songtao Wu, Sheng-hua Zhong, 2017
Deep learning based image steganalysis has attracted increasing attention in recent years. Several Convolutional Neural Network (CNN) models have been proposed and have achieved state-of-the-art performance in detecting steganography. In this paper, we explore an important technique in deep learning, batch normalization, for the task of image steganalysis. Different from natural image classification, steganalysis must discriminate cover images from stego images, which are the result of adding weak stego signals to covers. This characteristic makes a cover image more statistically similar to its stego image than to other cover images, requiring steganalytic methods to use paired learning to extract effective features. Our theoretical analysis shows that a CNN model with multiple normalization layers is hard to generalize to new data in the test set when it is trained with paired learning. To handle this difficulty, we propose a novel normalization technique called Shared Normalization (SN). Unlike the batch normalization layer, which uses the mini-batch mean and standard deviation to normalize each input batch, SN shares the same statistics for all training and test batches. Based on the proposed SN layer, we further propose a novel neural network model for image steganalysis. Extensive experiments demonstrate that the proposed network with SN layers is stable and can detect state-of-the-art steganography with better performance than previous methods.
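A rough sketch of the Shared Normalization idea as described above: every training and test batch is normalized with one shared set of statistics rather than its own mini-batch statistics. The running-average update, momentum, and affine parameters below are assumptions, not the paper's exact formulation.

    # Shared-statistics normalization layer (illustrative sketch).
    import torch
    import torch.nn as nn

    class SharedNorm2d(nn.Module):
        def __init__(self, num_channels, momentum=0.1, eps=1e-5):
            super().__init__()
            self.momentum, self.eps = momentum, eps
            self.register_buffer("shared_mean", torch.zeros(num_channels))
            self.register_buffer("shared_var", torch.ones(num_channels))
            self.gamma = nn.Parameter(torch.ones(num_channels))
            self.beta = nn.Parameter(torch.zeros(num_channels))

        def forward(self, x):  # x: (batch, channels, H, W)
            if self.training:
                # Track a slowly moving average of the batch statistics.
                with torch.no_grad():
                    batch_mean = x.mean(dim=(0, 2, 3))
                    batch_var = x.var(dim=(0, 2, 3), unbiased=False)
                    self.shared_mean.lerp_(batch_mean, self.momentum)
                    self.shared_var.lerp_(batch_var, self.momentum)
            # The same shared statistics normalize training and test batches,
            # unlike BatchNorm, which uses each mini-batch's own statistics.
            mean = self.shared_mean.view(1, -1, 1, 1)
            var = self.shared_var.view(1, -1, 1, 1)
            x_hat = (x - mean) / torch.sqrt(var + self.eps)
            return self.gamma.view(1, -1, 1, 1) * x_hat + self.beta.view(1, -1, 1, 1)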
