A reputable social media or review account can provide good cover for spamming activities, and it has become common for spammers to openly buy and sell such accounts on the Web. We call these sold or bought accounts changed-hands (CH) accounts. They are hard to detect with existing spam detection algorithms because their spamming activities are disguised by clean histories. In this paper, we first propose the problem of detecting CH accounts and then design an effective detection algorithm that exploits changes in the content and writing styles of individual accounts, together with a novel feature selection method that works at a fine-grained level within each account. The proposed method not only determines whether an account has changed hands but also pinpoints the change point. Experimental results on online review accounts demonstrate the high effectiveness of our approach.
The widespread adoption of online social networks and the opportunity to commercialize popular accounts have attracted a large number of automated programs, known as artificial accounts. This paper focuses on classifying human and fake accounts on social networks by employing several graph neural networks to efficiently encode account attributes and network graph features. Our work uses both network structure and attributes to distinguish human and artificial accounts, and compares attributed and traditional graph embeddings. Separating complex, human-like artificial accounts into a standalone task reveals significant limitations of profile-based bot detection algorithms and shows the efficiency of network-structure-based methods for detecting sophisticated bot accounts. Experiments show that our approach achieves competitive performance compared with existing state-of-the-art bot detection systems using only network-driven features. The source code for this paper is available at http://github.com/karpovilia/botdetection.
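How a graph neural network folds network structure into an account's representation can be shown with one propagation step in the spirit of a GCN layer. This is a minimal stand-in, not the paper's model; the graph, features, and weights below are invented for illustration.

```python
import numpy as np

# One graph-convolution step: each account's feature vector is averaged
# with its neighbors' (via self-loops and row normalization) before a
# linear transform, so network structure informs the human-vs-bot signal.

def gcn_layer(adj, features, weights):
    """Row-normalized (A + I) @ X @ W: mean-aggregate neighbors, then transform."""
    a_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)  # node degrees incl. self-loop
    return (a_hat / deg) @ features @ weights

# Tiny 3-account graph: accounts 0 and 1 are linked, account 2 is isolated.
adj = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])
x = np.array([[1., 0.],    # toy profile features per account
              [0., 1.],
              [1., 1.]])
w = np.eye(2)              # identity weights, for readability
h = gcn_layer(adj, x, w)
print(h)
```

After one step, the two linked accounts' representations are pulled toward each other while the isolated account's is unchanged, which is exactly the structural signal a profile-only classifier cannot see.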
Most online news media outlets rely heavily on revenue generated from reader clicks, and because there are numerous such outlets, they must compete with each other for reader attention. To attract readers to click on an article and subsequently visit the media site, outlets often pair article links with catchy headlines that lure readers into clicking. Such headlines are known as clickbaits. While these baits may trick readers into clicking, in the long run clickbaits usually do not live up to readers' expectations and leave them disappointed. In this work, we attempt to automatically detect clickbaits and then build a browser extension that warns readers of different media sites about the possibility of being baited by such headlines. The extension also offers each reader the option to block clickbaits she does not want to see. Then, using these reader choices, the extension automatically blocks similar clickbaits during her future visits. We run extensive offline and online experiments across multiple media sites and find that the proposed clickbait detection and personalized blocking approaches perform very well, achieving 93% accuracy in detecting clickbaits and 89% accuracy in blocking them.
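A headline-level clickbait detector can be sketched as a tiny bag-of-words Naive Bayes classifier. This is not the paper's model; the training headlines and add-one smoothing below are illustrative assumptions, meant only to show how lexical cues separate clickbait from ordinary headlines.

```python
# Toy Naive Bayes clickbait detector over word counts with add-one smoothing.
from collections import Counter
import math

clickbait = ["you won't believe what happened next",
             "this one weird trick will shock you",
             "10 secrets they don't want you to know"]
normal = ["senate passes budget bill",
          "local team wins championship",
          "storm expected to hit coast tuesday"]

def train(docs):
    """Word counts and total word count for one class."""
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

cb_counts, cb_total = train(clickbait)
nm_counts, nm_total = train(normal)
vocab = set(cb_counts) | set(nm_counts)

def log_prob(headline, counts, total):
    """Class log-likelihood of a headline with add-one smoothing."""
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in headline.split())

def is_clickbait(headline):
    return (log_prob(headline, cb_counts, cb_total)
            > log_prob(headline, nm_counts, nm_total))

print(is_clickbait("you won't believe this trick"))
print(is_clickbait("senate passes budget bill"))
```

A deployed detector would add character-level, syntactic, and curiosity-gap features, but the decision rule, comparing class-conditional likelihoods of the headline, is the same shape.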
Different types of malicious activities have been flagged in multiple permissionless blockchains such as Bitcoin and Ethereum. While some malicious activities exploit vulnerabilities in the blockchain's infrastructure, others target its users through social-engineering techniques. To address these problems, we aim to automatically flag blockchain accounts that originate such malicious exploitation of other participants' accounts. To that end, we identify a robust supervised machine learning (ML) algorithm that is resistant to bias induced by the over-representation of certain malicious activities in the available dataset and is robust against adversarial attacks. We find that most of the malicious activities reported thus far, for example in the Ethereum blockchain ecosystem, behave statistically similarly. Further, previously used ML algorithms for identifying malicious accounts show bias toward a particular over-represented malicious activity. We find that neural networks (NN) hold up best in the face of such bias-inducing datasets while also being robust against certain adversarial attacks.
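One standard way to keep an over-represented malicious activity from dominating training is to reweight examples inversely to their class frequency. The sketch below is a generic technique, not the paper's procedure; the labels are invented for illustration.

```python
# Inverse-frequency class weights: the rarest class gets weight 1.0,
# over-represented classes are down-weighted in the training loss.
from collections import Counter

def class_weights(labels):
    """Map each class label to min_count / class_count."""
    counts = Counter(labels)
    min_count = min(counts.values())
    return {c: min_count / n for c, n in counts.items()}

# Hypothetical label distribution: phishing reports dwarf Ponzi reports.
labels = ["phishing"] * 8 + ["ponzi"] * 2
print(class_weights(labels))
```

With these weights applied to the loss, eight phishing examples contribute the same total gradient mass as two Ponzi examples, which counters the representation bias the abstract describes.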
We investigate the new problem of detecting hands and recognizing their physical contact state in unconstrained conditions. This is a challenging inference task given the need to reason beyond the local appearance of hands. The lack of training annotations indicating which object, or part of an object, a hand is in contact with further complicates the task. To address this problem, we propose a novel convolutional network based on Mask-RCNN that jointly learns to localize hands and predict their physical contact. The network uses outputs from another object detector to obtain the locations of objects present in the scene. It uses these outputs and hand locations to recognize a hand's contact state using two attention mechanisms. The first attention mechanism is based on the affinity between the hand and a region enclosing both the hand and the object, and densely pools features from this region to the hand region. The second attention module adaptively selects salient features from this plausible region of contact. To develop and evaluate our method's performance, we introduce a large-scale dataset called ContactHands, containing unconstrained images annotated with hand locations and contact states. The proposed network, including the parameters of the attention modules, is end-to-end trainable. It achieves approximately 7% relative improvement over a baseline network built on the vanilla Mask-RCNN architecture and trained to recognize hand contact states.
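The "adaptively select salient features" step can be illustrated with plain dot-product attention: region features are weighted by their similarity to the hand feature and then pooled. This toy stand-in is not the ContactHands network; the two-dimensional features are invented for clarity.

```python
import numpy as np

# Dot-product attention pooling: score each candidate-region feature
# against the hand feature, softmax the scores, and take the weighted sum.

def attention_pool(hand_feat, region_feats):
    """Softmax-weighted pooling of region features, conditioned on the hand."""
    scores = region_feats @ hand_feat
    scores = scores - scores.max()           # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ region_feats

hand = np.array([1.0, 0.0])                  # toy hand feature
regions = np.array([[1.0, 0.0],              # region aligned with the hand
                    [0.0, 1.0]])             # unrelated region
pooled = attention_pool(hand, regions)
print(pooled)
```

The region most similar to the hand feature dominates the pooled vector, which is the behavior the second attention module needs when picking out the plausible region of contact.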
In social networks, a single user may create multiple accounts to spread his or her opinions and influence others by actively commenting on different news pages. It would benefit both social networks and their communities to demote such abnormal activities, and the first step is to detect those accounts. The detection is challenging, however, because these accounts may have very realistic names and reasonable activity patterns. In this paper, we investigate three different approaches and propose using graph embedding together with semi-supervised learning to predict whether a pair of accounts was created by the same user. We carry out extensive experimental analyses to understand how changes in the input data and in algorithmic parameters and optimization affect prediction performance. We also discover that local information is more important than global information for such prediction, and we point out the threshold that leads to the best results. We test the proposed approach on 6,700 Facebook pages from the Middle East and achieve an average accuracy of 0.996 and an AUC (area under the curve) of 0.952 for users with the same name; on the U.S. 2016 election dataset, we obtain a best AUC of 0.877 for users with different names.
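The final pairwise decision, once each account has a graph embedding, can be sketched as a thresholded similarity test. This is a simplification of the paper's pipeline (which learns the decision semi-supervised); the embeddings and the 0.9 threshold below are illustrative assumptions.

```python
# Predict "same user" for a pair of accounts by thresholding the cosine
# similarity of their graph embeddings.
import math

def cosine(u, v):
    """Cosine similarity of two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def same_user(emb_a, emb_b, threshold=0.9):
    """Flag the pair when the embeddings are nearly parallel."""
    return cosine(emb_a, emb_b) >= threshold

# Hypothetical embeddings: a and b were learned from near-identical
# commenting neighborhoods; c comes from an unrelated part of the graph.
a = [0.90, 0.10, 0.00]
b = [0.85, 0.15, 0.05]
c = [0.00, 0.20, 0.98]
print(same_user(a, b), same_user(a, c))
```

In practice the threshold is the tunable quantity the abstract alludes to: sweeping it trades precision against recall, and the best operating point is found on validation pairs.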