
Self-Adversarial Training incorporating Forgery Attention for Image Forgery Localization

Published by Shunquan Tan
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Image editing techniques enable people to modify the content of an image without leaving visual traces and thus may cause serious security risks. Hence, the detection and localization of these forgeries become both necessary and challenging. Furthermore, unlike other tasks with extensive data, annotated forged images for training are usually scarce due to annotation difficulties. In this paper, we propose a self-adversarial training strategy and a reliable coarse-to-fine network that utilizes a self-attention mechanism to localize forged regions in tampered images. The self-attention module is based on a Channel-Wise High Pass Filter block (CW-HPF). CW-HPF leverages inter-channel relationships of features and extracts noise features with high-pass filters. Based on the CW-HPF, a self-attention mechanism, called forgery attention, is proposed to capture rich contextual dependencies of the intrinsic inconsistency extracted from tampered regions. Specifically, we append two types of attention modules on top of the CW-HPF to model internal interdependencies in the spatial dimension and external dependencies among channels, respectively. We exploit a coarse-to-fine network to enhance the noise inconsistency between original and tampered regions. More importantly, to address the issue of insufficient training data, we design a self-adversarial training strategy that expands the training data dynamically to achieve more robust performance. Specifically, in each training iteration, we perform adversarial attacks against our network to generate adversarial examples and train our model on them. Extensive experimental results demonstrate that our proposed algorithm steadily outperforms state-of-the-art methods by a clear margin on different benchmark datasets.
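The two core ingredients of the abstract, the CW-HPF block and the self-adversarial training loop, can be illustrated with a short sketch. This is a minimal PyTorch illustration and not the authors' implementation: the Laplacian-style high-pass kernel, the squeeze-and-excitation-style channel gating, and the FGSM-style attack are assumptions, since the abstract does not specify these details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CWHPF(nn.Module):
    """Rough sketch of a channel-wise high-pass filter block: a fixed
    high-pass kernel extracts noise residuals per channel, followed by a
    squeeze-and-excitation style channel weighting (both are assumptions,
    not the paper's exact design)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        # 3x3 Laplacian-like high-pass kernel, applied depthwise to every channel
        hpf = torch.tensor([[-1., -1., -1.],
                            [-1.,  8., -1.],
                            [-1., -1., -1.]]) / 8.0
        self.register_buffer("kernel", hpf.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.channels = channels
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        noise = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        w = self.gate(noise.mean(dim=(2, 3)))          # inter-channel relationships
        return noise * w.unsqueeze(-1).unsqueeze(-1)   # re-weighted noise features


def self_adversarial_step(model, x, mask, loss_fn, optimizer, eps=2 / 255):
    """One training iteration: attack the current model (FGSM used here as a
    stand-in for the unspecified attack), then train on the adversarial
    examples together with the clean ones."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), mask).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()  # expanded data

    optimizer.zero_grad()
    loss = loss_fn(model(x), mask) + loss_fn(model(x_adv), mask)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The point of the training step is that the adversarial examples are regenerated from the current model in every iteration, so the effective training set keeps expanding as the model changes, which is how the strategy compensates for the shortage of annotated forged images.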




Read also

Nowadays, advanced image editing tools and technical skills produce tampered images more realistically, which can easily evade image forensic systems and make authenticity verification of images more difficult. To tackle this challenging problem, we introduce TransForensics, a novel image forgery localization method inspired by Transformers. The two major components in our framework are dense self-attention encoders and dense correction modules. The former models global context and all pairwise interactions between local patches at different scales, while the latter improves the transparency of the hidden layers and corrects the outputs from different branches. Compared to previous traditional and deep learning methods, TransForensics not only captures discriminative representations and obtains high-quality mask predictions but is also not limited by tampering types and patch sequence orders. By conducting experiments on the main benchmarks, we show that TransForensics outperforms the state-of-the-art methods by a large margin.
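The central mechanism here, dense self-attention over local patch features, can be sketched as a single-scale head built from PyTorch's standard Transformer encoder. This is only an illustration; the actual TransForensics architecture (dense encoders at multiple scales plus dense correction modules) is not reproduced, and the layer sizes are arbitrary assumptions.

```python
import torch.nn as nn


class PatchSelfAttentionHead(nn.Module):
    """Flatten a feature map into patch tokens, let a Transformer encoder
    model all pairwise patch interactions, and predict a per-patch
    tampering score."""

    def __init__(self, in_channels, dim=256, heads=8, layers=4):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, dim, kernel_size=1)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.score = nn.Linear(dim, 1)  # tampering logit per patch

    def forward(self, feat):                                  # feat: (B, C, H, W)
        b, _, h, w = feat.shape
        tokens = self.proj(feat).flatten(2).transpose(1, 2)   # (B, H*W, dim)
        tokens = self.encoder(tokens)                         # global context
        scores = self.score(tokens).transpose(1, 2)           # (B, 1, H*W)
        return scores.view(b, 1, h, w)                        # coarse mask logits
```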
In this paper, we propose to utilize Convolutional Neural Networks (CNNs) and segmentation-based multi-scale analysis to locate tampered areas in digital images. First, to deal with color sliding-window inputs of different scales, a unified CNN architecture is designed. Then, we elaborately design the training procedures of the CNNs on sampled training patches. With a set of robust multi-scale tampering detectors based on CNNs, complementary tampering possibility maps can be generated. Last but not least, a segmentation-based method is proposed to fuse the maps and generate the final decision map. By exploiting the benefits of both the small-scale and large-scale analyses, the segmentation-based multi-scale analysis can lead to a performance leap in forgery localization with CNNs. Numerous experiments are conducted to demonstrate the effectiveness and efficiency of our method.
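The fusion of complementary multi-scale possibility maps can be illustrated roughly as below. The paper fuses the maps with a segmentation-based scheme; the weighted average used here is a simplified stand-in, and the threshold and toy inputs are assumptions.

```python
import torch
import torch.nn.functional as F


def fuse_possibility_maps(maps, out_size, weights=None):
    """Fuse tampering possibility maps produced by detectors run at
    different window scales (simplified: upsample, weighted-average,
    threshold)."""
    weights = weights or [1.0 / len(maps)] * len(maps)
    resized = [F.interpolate(m, size=out_size, mode="bilinear",
                             align_corners=False) for m in maps]
    fused = sum(w * m for w, m in zip(weights, resized))
    return (fused > 0.5).float()        # final binary decision map


# usage: the small-scale map catches fine boundaries, the large-scale map is more stable
small = torch.rand(1, 1, 128, 128)      # toy possibility maps
large = torch.rand(1, 1, 32, 32)
decision = fuse_possibility_maps([small, large], out_size=(256, 256))
```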
In this paper, a copy-move forgery detection method based on the Convolutional Kernel Network is proposed. Different from methods based on conventional hand-crafted features, the Convolutional Kernel Network is a data-driven local descriptor with a deep convolutional structure. Thanks to the development of deep learning theory and widely available datasets, data-driven methods can achieve competitive performance under different conditions owing to their excellent discriminative capability. Besides, our Convolutional Kernel Network is reformulated as a series of matrix computations and convolutional operations which are easy to parallelize and accelerate on a GPU, leading to high efficiency. Then, appropriate preprocessing and postprocessing for the Convolutional Kernel Network are adopted to achieve copy-move forgery detection. In particular, a segmentation-based keypoint distribution strategy is proposed and a GPU-based adaptive oversegmentation method is adopted. Numerous experiments are conducted to demonstrate the effectiveness and robustness of the GPU version of the Convolutional Kernel Network, and the state-of-the-art performance of the proposed copy-move forgery detection method built on it.
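The matching stage that follows descriptor extraction in copy-move pipelines can be sketched generically. The descriptors in the paper come from a Convolutional Kernel Network, which is not reproduced here; this snippet only shows the common idea of flagging descriptor pairs that are highly similar yet spatially distant, with thresholds chosen arbitrarily.

```python
import torch


def match_descriptors(desc, coords, sim_thresh=0.95, min_dist=16):
    """Given L2-normalised local descriptors (desc: (N, D)) and their pixel
    coordinates (coords: (N, 2)), return index pairs that are near-duplicates
    in feature space but far apart in the image, i.e. copy-move candidates."""
    n = desc.shape[0]
    sim = desc @ desc.t()                               # cosine similarities
    dist = torch.cdist(coords.float(), coords.float())  # spatial distances
    cand = (sim > sim_thresh) & (dist > min_dist)
    cand &= ~torch.eye(n, dtype=torch.bool, device=desc.device)  # drop self-matches
    return cand.nonzero()                               # (num_pairs, 2) indices
```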
Xiuli Bi, Yanbin Liu, Bin Xiao (2020)
Recently, many detection methods based on convolutional neural networks (CNNs) have been proposed for image splicing forgery detection. Most of these detection methods focus on local patches or local objects. In fact, image splicing forgery detection is a global binary classification task that distinguishes the tampered and non-tampered regions by image fingerprints. However, some specific image contents are hardly retained by CNN-based detection networks, yet, if retained, they would improve the detection accuracy of the networks. To resolve these issues, we propose a novel network called dual-encoder U-Net (D-Unet) for image splicing forgery detection, which employs an unfixed encoder and a fixed encoder. The unfixed encoder autonomously learns the image fingerprints that differentiate between the tampered and non-tampered regions, whereas the fixed encoder intentionally provides the direction information that assists the learning and detection of the network. This dual encoder is followed by a spatial pyramid global-feature extraction module that expands the global insight of D-Unet for classifying the tampered and non-tampered regions more accurately. In an experimental comparison study of D-Unet and state-of-the-art methods, D-Unet outperformed the other methods in image-level and pixel-level detection, without requiring pre-training or training on a large number of forgery images. Moreover, it was stably robust to different attacks.
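The dual-encoder idea can be illustrated with a small stem module. The Sobel kernels standing in for the fixed encoder and the channel widths are assumptions; the paper's actual fixed encoder and its spatial pyramid global-feature module are not reproduced here.

```python
import torch
import torch.nn as nn


class DualEncoderStem(nn.Module):
    """A learnable ('unfixed') branch and a frozen branch with hand-set
    directional kernels (Sobel filters as a stand-in), concatenated before
    the rest of a U-Net."""

    def __init__(self, out_channels=32):
        super().__init__()
        self.unfixed = nn.Sequential(nn.Conv2d(3, out_channels, 3, padding=1),
                                     nn.ReLU(inplace=True))
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        kernels = torch.stack([sobel_x, sobel_x.t()]).unsqueeze(1)  # (2, 1, 3, 3)
        self.fixed = nn.Conv2d(1, 2, 3, padding=1, bias=False)
        self.fixed.weight.data.copy_(kernels)
        self.fixed.weight.requires_grad = False    # this branch stays fixed

    def forward(self, x):                          # x: (B, 3, H, W)
        gray = x.mean(dim=1, keepdim=True)
        return torch.cat([self.unfixed(x), self.fixed(gray)], dim=1)
```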
Rapid progress in deep learning is continuously making it easier and cheaper to generate video forgeries. Hence, it becomes very important to have a reliable way of detecting these forgeries. This paper describes such an approach for various tampering scenarios. The problem is modelled as a per-frame binary classification task. We propose to use transfer learning from a face recognition task to improve tampering detection on many different facial manipulation scenarios. Furthermore, in low-resolution settings, where single-frame detection performs poorly, we try to make use of neighboring frames for middle-frame classification. We evaluate both approaches on the public FaceForensics benchmark, achieving state-of-the-art accuracy.
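The transfer-learning setup, reusing a pretrained backbone for per-frame real/tampered classification, can be sketched as follows. An ImageNet-pretrained torchvision ResNet-50 is used here only as a stand-in for the face-recognition network mentioned in the abstract, and the neighboring-frame aggregation for low-resolution settings is omitted.

```python
import torch.nn as nn
from torchvision import models


def build_frame_classifier(freeze_backbone=True):
    """Pretrained backbone with its final layer replaced by a binary
    real/tampered head for per-frame classification."""
    # torchvision >= 0.13 weights API
    net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    if freeze_backbone:
        for p in net.parameters():
            p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, 2)   # new head stays trainable
    return net
```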

