
FakePolisher: Making DeepFakes More Detection-Evasive by Shallow Reconstruction

Published by Yihao Huang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





At this moment, GAN-based image generation methods are still imperfect: their upsampling design leaves characteristic artifact patterns in the synthesized image. Recent detection methods can easily exploit such artifact patterns to distinguish real from GAN-synthesized images. However, because existing detection methods rely heavily on these artifact patterns, they can become futile if the patterns are reduced. Towards reducing the artifacts in synthesized images, in this paper we devise a simple yet powerful approach termed FakePolisher that performs shallow reconstruction of fake images through a learned linear dictionary, aiming to effectively and efficiently reduce the artifacts introduced during image synthesis. A comprehensive evaluation on 3 state-of-the-art DeepFake detection methods and fake images generated by 16 popular GAN-based fake image generation techniques demonstrates the effectiveness of our technique. Overall, by reducing artifact patterns, our technique significantly lowers the accuracy of the 3 state-of-the-art fake image detection methods, i.e., by 47% on average and up to 93% in the worst case.
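As a rough illustration of the shallow-reconstruction idea, the sketch below learns a linear dictionary with plain PCA over patches of real images and then projects a fake image's patches onto that subspace. PCA, the patch size, and the number of atoms are illustrative assumptions, not the authors' actual dictionary-learning procedure.

```python
# Hedged sketch: learn a linear dictionary (plain PCA here) from patches of
# real images, then "polish" a fake image by projecting its patches onto the
# learned subspace and reassembling. PATCH and N_ATOMS are assumed values.
import numpy as np
from sklearn.decomposition import PCA

PATCH = 8      # assumed patch size
N_ATOMS = 32   # assumed number of dictionary atoms (principal components)

def extract_patches(img):
    """Split a grayscale image into non-overlapping PATCH x PATCH patches."""
    h, w = img.shape
    return np.stack([
        img[i:i + PATCH, j:j + PATCH].ravel()
        for i in range(0, h - PATCH + 1, PATCH)
        for j in range(0, w - PATCH + 1, PATCH)
    ])

def reassemble(patches, shape):
    """Inverse of extract_patches for sides that are multiples of PATCH."""
    out = np.zeros(shape)
    k = 0
    for i in range(0, shape[0] - PATCH + 1, PATCH):
        for j in range(0, shape[1] - PATCH + 1, PATCH):
            out[i:i + PATCH, j:j + PATCH] = patches[k].reshape(PATCH, PATCH)
            k += 1
    return out

# 1) Fit the linear dictionary on real-image patches (random data stands in here).
real_images = [np.random.rand(128, 128) for _ in range(20)]
dictionary = PCA(n_components=N_ATOMS).fit(
    np.concatenate([extract_patches(im) for im in real_images]))

# 2) Shallow reconstruction of a fake image: project onto the real-image subspace.
fake_image = np.random.rand(128, 128)
codes = dictionary.transform(extract_patches(fake_image))
polished = reassemble(dictionary.inverse_transform(codes), fake_image.shape)
```

Because the subspace is fit only to real-image statistics, components that mainly carry synthesis artifacts tend to be attenuated by the projection, which is the intuition behind the reported drop in detector accuracy.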




Read also

The novelty and creativity of DeepFake generation techniques have attracted worldwide media attention. Many researchers focus on detecting fake images produced by these GAN-based image generation methods, with fruitful results indicating that GAN-based image generation methods are not yet perfect. Many studies show that the upsampling procedure used in the decoder of GAN-based image generation methods inevitably introduces artifact patterns into fake images. In order to further improve the fidelity of DeepFake images, in this work we propose a simple yet powerful framework to reduce the artifact patterns of fake images without hurting image quality. The method is based on an important observation that adding noise to a fake image can successfully reduce the artifact patterns in both the spatial and frequency domains. Thus we use a combination of additive noise and deep image filtering to reconstruct the fake images, and we name our method FakeRetouch. The deep image filtering provides a specialized filter for each pixel in the noisy image, taking full advantage of deep learning. The deeply filtered images retain very high fidelity to their DeepFake counterparts. Moreover, we use the semantic information of the image to generate an adversarial guidance map to add noise intelligently. Our method aims at improving the fidelity of DeepFake images and exposing the problems of existing DeepFake detection methods, and we hope that the found vulnerabilities can help improve future generations of DeepFake detection methods.
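A minimal sketch of the noise-then-filter idea from this abstract, assuming a toy kernel-prediction network in PyTorch; the architecture, kernel size, and noise level are placeholders, not the FakeRetouch implementation.

```python
# Sketch: a small network predicts a normalized 3x3 kernel for every pixel and
# applies it to the noisy version of a fake image (per-pixel deep filtering).
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelPredictor(nn.Module):
    """Predicts and applies a per-pixel 3x3 filter to a 3-channel image."""
    def __init__(self, k=3):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, k * k, 3, padding=1),
        )

    def forward(self, noisy):
        b, _, h, w = noisy.shape
        weights = torch.softmax(self.net(noisy), dim=1)          # (B, k*k, H, W)
        patches = F.unfold(noisy, self.k, padding=self.k // 2)   # (B, 3*k*k, H*W)
        patches = patches.view(b, 3, self.k * self.k, h, w)
        return (patches * weights.unsqueeze(1)).sum(dim=2)       # filtered image

fake = torch.rand(1, 3, 64, 64)                 # a DeepFake image stands in here
noisy = fake + 0.05 * torch.randn_like(fake)    # step 1: additive noise
retouched = KernelPredictor()(noisy)            # step 2: per-pixel deep filtering
```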
Recent advances in autoencoders and generative models have given rise to effective video forgery methods, used for generating so-called deepfakes. Mitigation research is mostly focused on post-factum deepfake detection and not on prevention. We complement these efforts by introducing a novel class of adversarial attacks, training-resistant attacks, which can disrupt face-swapping autoencoders whether or not the adversarial images have been included in the training set of said autoencoders. We propose the Oscillating GAN (OGAN) attack, a novel attack optimized to be training-resistant, which introduces spatial-temporal distortions to the output of face-swapping autoencoders. To implement OGAN, we construct a bilevel optimization problem in which we train a generator and a face-swapping model instance against each other. Specifically, we pair each input image with a target distortion, and feed them into a generator that produces an adversarial image. This image will exhibit the distortion when a face-swapping autoencoder is applied to it. We solve the optimization problem by training the generator and the face-swapping model simultaneously using an iterative process of alternating optimization. Next, we analyze the previously published Distorting Attack and show it is training-resistant, though it is outperformed by our suggested OGAN. Finally, we validate both attacks using a popular implementation of FaceSwap, and show that they transfer across different target models and target faces, including faces the adversarial attacks were not trained on. More broadly, these results demonstrate the existence of training-resistant adversarial attacks, potentially applicable to a wide range of domains.
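The alternating optimization described in this abstract could be sketched roughly as follows, with stand-in TinyGen and TinySwap modules and a simple L2 distortion objective; this is a hypothetical skeleton, not the authors' OGAN code or loss functions.

```python
# Hypothetical skeleton of alternating (bilevel) optimization: a generator is
# trained so its adversarial output induces a target distortion after face
# swapping, while the face-swapping model keeps training on clean data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGen(nn.Module):
    """Maps (image, target distortion) to an adversarial image close to the input."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(6, 3, 3, padding=1)
    def forward(self, img, distortion):
        return img + 0.1 * torch.tanh(self.conv(torch.cat([img, distortion], dim=1)))

class TinySwap(nn.Module):
    """Stand-in for a face-swapping autoencoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 3, 3, padding=1))
    def forward(self, img):
        return self.net(img)

gen, swap = TinyGen(), TinySwap()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
f_opt = torch.optim.Adam(swap.parameters(), lr=1e-3)

image = torch.rand(1, 3, 64, 64)
# A paired target distortion (toy horizontal wave pattern).
target_distortion = 0.2 * torch.sin(torch.linspace(0, 12.0, 64)).view(1, 1, 1, 64).expand(1, 3, 64, 64)

for step in range(100):
    # Generator step: the swapped adversarial image should exhibit the paired
    # distortion, while the adversarial image itself stays close to the input.
    adv = gen(image, target_distortion)
    g_loss = F.mse_loss(swap(adv), image + target_distortion) + F.mse_loss(adv, image)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Face-swap step: keep training the swap model, so the attack remains
    # effective against models it was not explicitly trained with.
    f_loss = F.mse_loss(swap(image), image)
    f_opt.zero_grad(); f_loss.backward(); f_opt.step()
```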
Yisroel Mirsky, Wenke Lee (2020)
Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake. In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, impersonation of political leaders, and the defamation of innocent individuals. Since then, these 'deepfakes' have advanced significantly. In this paper, we explore the creation and detection of deepfakes and provide an in-depth view of how these architectures work. The purpose of this survey is to provide the reader with a deeper understanding of (1) how deepfakes are created and detected, (2) the current trends and advancements in this domain, (3) the shortcomings of the current defense solutions, and (4) the areas which require further research and attention.
Image forgery detection is the task of detecting and localizing forged parts in tampered images. Previous works mostly focus on high-resolution images using traces of resampling features, demosaicing features, or sharpness of edges. However, a good detection method should also be applicable to low-resolution images, because compressed or resized images are common these days. To this end, we propose a Shallow Convolutional Neural Network (SCNN) capable of distinguishing the boundaries of forged regions from original edges in low-resolution images. SCNN is designed to utilize the information of chroma and saturation. Based on SCNN, two approaches, named Sliding Windows Detection (SWD) and Fast SCNN respectively, are developed to detect and localize the image forgery region. In this paper, we substantiate that Fast SCNN can detect drastic changes in chroma and saturation. In image forgery detection experiments, our model is evaluated on the CASIA 2.0 dataset. The results show that Fast SCNN performs well on low-resolution images and achieves significant improvements over the state of the art.
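A minimal sketch of a shallow CNN over stacked chroma/saturation channels in the spirit of this abstract; the layer sizes and the 32x32 patch assumption are illustrative, not the paper's exact architecture.

```python
# Sketch: a shallow CNN that classifies a small patch (Cr, Cb, saturation
# channels stacked) as a forged-region boundary vs. an original edge.
import torch
import torch.nn as nn

class ShallowCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # assumes 32x32 input patches

    def forward(self, x):
        x = self.features(x)                 # (B, 32, 8, 8) for 32x32 patches
        return self.classifier(x.flatten(1))

# Sliding-window style use: score a single 32x32 patch of the converted image.
patch = torch.rand(1, 3, 32, 32)             # stacked Cr, Cb, saturation channels
logits = ShallowCNN()(patch)                 # forged-boundary vs. original-edge scores
```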
Progress in generative modelling, especially generative adversarial networks, has made it possible to efficiently synthesize and alter media at scale. Malicious individuals now rely on these machine-generated media, or deepfakes, to manipulate social discourse. In order to ensure media authenticity, existing research is focused on deepfake detection. Yet, the adversarial nature of frameworks used for generative modeling suggests that progress towards detecting deepfakes will enable more realistic deepfake generation. Therefore, it comes as no surprise that developers of generative models are under the scrutiny of stakeholders dealing with misinformation campaigns. At the same time, generative models have a lot of positive applications. As such, there is a clear need to develop tools that ensure the transparent use of generative modeling, while minimizing the harm caused by malicious applications. Our technique optimizes over the source of entropy of each generative model to probabilistically attribute a deepfake to one of the models. We evaluate our method on the seminal example of face synthesis, demonstrating that our approach achieves 97.62% attribution accuracy and is less sensitive to perturbations and adversarial examples. We discuss the ethical implications of our work, identify where our technique can be used, and highlight that a more meaningful legislative framework is required for a more transparent and ethical use of generative modeling. Finally, we argue that model developers should be capable of claiming plausible deniability, and we propose a second framework to do so: this allows a model developer to produce evidence that they did not produce the media they are being accused of having produced.
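One way to read "optimizing over the source of entropy" is latent inversion: for each candidate generator, fit a latent code that best reconstructs the questioned image and attribute the image to the model with the smallest residual. The sketch below follows that reading; the toy linear generators, step count, and softmax temperature are assumptions rather than the paper's method.

```python
# Hedged sketch: attribute an image by inverting each candidate generator's
# latent input and picking the model that explains the image best.
import torch
import torch.nn.functional as F

def invert(gen, image, latent_dim=64, steps=200, lr=0.05):
    """Optimize a latent code so gen(z) reconstructs the image; return the residual."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(gen(z), image)
        loss.backward()
        opt.step()
    return loss.detach()

def attribute(image, generators, temp=0.01):
    """Probabilistic attribution: lower reconstruction residual -> higher probability."""
    residuals = torch.stack([invert(g, image) for g in generators])
    return torch.softmax(-residuals / temp, dim=0)

# Toy stand-ins for candidate generators (e.g. different face-synthesis models).
gens = [torch.nn.Linear(64, 3 * 32 * 32) for _ in range(3)]
wrapped = [lambda z, g=g: g(z).view(1, 3, 32, 32) for g in gens]
query = wrapped[1](torch.randn(1, 64)).detach()   # image "produced" by model 1
print(attribute(query, wrapped))                   # expected to peak at index 1
```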
