
Dissecting Click Fraud Autonomy in the Wild

Posted by Tong Zhu
Publication date: 2021
Research field: Informatics Engineering
Language: English





Although the use of pay-per-click mechanisms stimulates the prosperity of the mobile advertisement network, fraudulent ad clicks result in huge financial losses for advertisers. Extensive studies identify click fraud according to click/traffic patterns based on dynamic analysis. However, in this study, we identify a novel click fraud, named humanoid attack, which can circumvent existing detection schemes by generating fraudulent clicks with patterns similar to those of normal clicks. We implement the first tool, ClickScanner, to detect humanoid attacks on Android apps based on static analysis and a variational autoencoder (VAE) with limited knowledge of fraudulent examples. We define novel features to characterize the patterns of humanoid attacks at the app bytecode level. ClickScanner builds a data dependency graph (DDG) based on static analysis to extract these key features and form a feature vector. We then propose a classification model trained only on benign datasets to overcome the limited knowledge of humanoid attacks. We leverage ClickScanner to conduct the first large-scale measurement on app markets (i.e., 120,000 apps from Google Play and Huawei AppGallery) and reveal several unprecedented phenomena. First, even among the top-rated 20,000 apps, ClickScanner still identifies 157 apps as fraudulent, which shows the prevalence of humanoid attacks. Second, we observe that the ad-SDK-based attack (i.e., the fraudulent code resides in third-party ad SDKs) is now a dominant attack approach. Third, the manner of attack differs notably across apps of various categories and popularities. Finally, we notice several existing variants of the humanoid attack. Additionally, our measurements demonstrate that the proposed ClickScanner is accurate and time-efficient (i.e., its detection overhead is only 15.35% of that of existing schemes).
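The core of the detection pipeline described above is a one-class model: a VAE is trained only on feature vectors extracted from benign apps, and an app whose features reconstruct poorly is flagged as suspicious. The sketch below illustrates that idea only; the feature dimension, network sizes, and thresholding rule are assumptions for illustration, not the paper's configuration.

```python
# One-class detection sketch in the spirit of ClickScanner's VAE stage:
# train a variational autoencoder on benign feature vectors, then flag
# samples whose reconstruction error exceeds a threshold estimated from
# the benign data. Dimensions and the threshold rule are assumed.
import torch
import torch.nn as nn

FEATURE_DIM = 8  # e.g. counts of click-dispatch patterns found in the DDG (assumed)

class VAE(nn.Module):
    def __init__(self, dim=FEATURE_DIM, latent=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 16), nn.ReLU())
        self.mu = nn.Linear(16, latent)
        self.logvar = nn.Linear(16, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(), nn.Linear(16, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def loss_fn(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon_loss = ((recon - x) ** 2).sum(dim=1).mean()
    kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)).mean()
    return recon_loss + kl

def train_on_benign(benign_x, epochs=50):
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, mu, logvar = model(benign_x)
        loss = loss_fn(benign_x, recon, mu, logvar)
        opt.zero_grad(); loss.backward(); opt.step()
    # Assumed threshold rule: mean + 3 std of benign reconstruction error.
    with torch.no_grad():
        err = ((model(benign_x)[0] - benign_x) ** 2).sum(dim=1)
    return model, (err.mean() + 3 * err.std()).item()

def looks_fraudulent(model, threshold, x):
    with torch.no_grad():
        recon, _, _ = model(x.unsqueeze(0))
        return ((recon - x) ** 2).sum().item() > threshold
```

The benefit of this one-class setup matches the motivation in the abstract: the model never needs labeled humanoid-attack samples, only benign apps.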


Read also

106 - Zainul Abi Din 2021
App builders commonly use security challenges, a form of step-up authentication, to add security to their apps. However, the ethical implications of this type of architecture have not been studied previously. In this paper, we present a large-scale measurement study of running an existing anti-fraud security challenge, Boxer, in real apps running on mobile devices. We find that although Boxer works well overall, it is unable to scan effectively on devices that run its machine learning models at less than one frame per second (FPS), blocking users who use inexpensive devices. With the insights from our study, we design Daredevil, a new anti-fraud system for scanning payment cards that works well across the broad range of performance characteristics and hardware configurations found on modern mobile devices. Daredevil reduces the number of devices that run at less than one FPS by an order of magnitude compared to Boxer, providing a more equitable system for fighting fraud. In total, we collect data from 5,085,444 real devices spread across 496 real apps running production software and interacting with real users.
87 - Mingxi Wu, Xi Chen 2021
Modern fraudsters write malicious programs to coordinate a group of accounts to commit collective fraud for illegal profits on online platforms. These programs have access to a finite set of resources (a set of IPs, devices, accounts, etc.) and sometimes manipulate fake accounts to collaboratively attack the target system. Inspired by these observations, we share our experience in building two real-time risk control systems to detect collective fraud. We show that with TigerGraph, a powerful graph database, and its innovative query language, GSQL, data scientists and fraud experts can conveniently implement and deploy an end-to-end risk control system as a graph database application.
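The shared-resource observation in this abstract can be illustrated with a small script: accounts that reuse the same IPs or devices collapse into a few large connected components, which is rare for organic users. The union-find sketch below is plain Python standing in for the GSQL/TigerGraph queries the authors describe; the event format and the cluster-size threshold are assumptions.

```python
# Toy illustration of collective-fraud grouping: link every account to the
# IPs/devices it used, then report connected components of accounts above a
# size threshold. Not the authors' GSQL implementation.
from collections import defaultdict

def suspicious_account_groups(events, min_size=10):
    """events: iterable of (account_id, resource_id) pairs, e.g. logins keyed by IP or device."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for account, resource in events:
        union(("acct", account), ("res", resource))

    groups = defaultdict(set)
    for account, _ in events:
        groups[find(("acct", account))].add(account)

    return [accts for accts in groups.values() if len(accts) >= min_size]

# Example: three accounts sharing one IP form a single cluster.
print(suspicious_account_groups(
    [("a1", "ip:1.2.3.4"), ("a2", "ip:1.2.3.4"), ("a3", "ip:1.2.3.4")],
    min_size=2))
```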
Download fraud is a prevalent threat in mobile App markets, where fraudsters manipulate the number of downloads of Apps via various cheating approaches. Purchased fake downloads can mislead recommendation and search algorithms and further lead to a bad user experience in App markets. In this paper, we investigate the download fraud problem based on a company's App Market, which is one of the most popular Android App markets. We release a honeypot App on the App Market and purchase fake downloads from fraudster agents to track fraud activities in the wild. Based on our interaction with the fraudsters, we categorize download fraud activities into three types according to their intentions: boosting front-end downloads, optimizing App search ranking, and enhancing the user acquisition & retention rate. For the download fraud aimed at optimizing App search ranking, we select, evaluate, and validate several features for identifying fake downloads based on billions of download records. To gain a comprehensive understanding of download fraud, we further gather the stances of App marketers, fraudster agencies, and market operators on download fraud. The subsequent analysis and suggestions shed light on ways to mitigate download fraud in App markets and other social platforms. To the best of our knowledge, this is the first work that investigates the download fraud problem in mobile App markets.
AI-manipulated videos, commonly known as deepfakes, are an emerging problem. Recently, researchers in academia and industry have contributed several (self-created) benchmark deepfake datasets, and deepfake detection algorithms. However, little effort has gone towards understanding deepfake videos in the wild, leading to a limited understanding of the real-world applicability of research contributions in this space. Even if detection schemes are shown to perform well on existing datasets, it is unclear how well the methods generalize to real-world deepfakes. To bridge this gap in knowledge, we make the following contributions: First, we collect and present the largest dataset of deepfake videos in the wild, containing 1,869 videos from YouTube and Bilibili, and extract over 4.8M frames of content. Second, we present a comprehensive analysis of the growth patterns, popularity, creators, manipulation strategies, and production methods of deepfake content in the real-world. Third, we systematically evaluate existing defenses using our new dataset, and observe that they are not ready for deployment in the real-world. Fourth, we explore the potential for transfer learning schemes and competition-winning techniques to improve defenses.
98 - Shuhan Yuan, Xintao Wu, Jun Li 2017
In this paper, we focus on fraud detection on a signed graph with only a small set of labeled training data. We propose a novel framework that combines deep neural networks and spectral graph analysis. In particular, we use the node projection (called the spectral coordinate) in the low-dimensional spectral space of the graph's adjacency matrix as the input of deep neural networks. Spectral coordinates in the spectral space capture the most useful topology information of the network. Due to the small dimension of spectral coordinates (compared with the dimension of the adjacency matrix derived from a graph), training deep neural networks becomes feasible. We develop and evaluate two neural networks, a deep autoencoder and a convolutional neural network, in our fraud detection framework. Experimental results on a real signed graph show that our spectrum-based deep neural networks are effective in fraud detection.
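The spectral-coordinate step described in this abstract can be sketched as follows: take the top-k eigenvectors of the signed adjacency matrix and use row i as the low-dimensional representation of node i, which is then fed to a neural model. The toy graph and the choice of k below are assumptions, and the deep autoencoder/CNN stage from the paper is omitted.

```python
# Minimal sketch of spectral coordinates for a signed graph: project each
# node onto the k largest-magnitude eigenvectors of the adjacency matrix.
import numpy as np

def spectral_coordinates(adj, k=2):
    """adj: symmetric signed adjacency matrix (+1 friend, -1 foe, 0 no edge)."""
    eigvals, eigvecs = np.linalg.eigh(adj)        # eigendecomposition of a symmetric matrix
    top = np.argsort(np.abs(eigvals))[::-1][:k]   # indices of the k largest-magnitude eigenvalues
    return eigvecs[:, top]                        # row i = spectral coordinate of node i

# Toy signed graph: nodes 0-2 are mutual friends, node 3 is hostile to all of them.
A = np.array([[ 0,  1,  1, -1],
              [ 1,  0,  1, -1],
              [ 1,  1,  0, -1],
              [-1, -1, -1,  0]], dtype=float)

coords = spectral_coordinates(A, k=2)
print(coords.shape)  # (4, 2): one low-dimensional coordinate per node
```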