
Impersonation-as-a-Service: Characterizing the Emerging Criminal Infrastructure for User Impersonation at Scale

Added by Michele Campobasso
Publication date: 2020
Language: English





In this paper we provide evidence of an emerging criminal infrastructure enabling impersonation attacks at scale. Impersonation-as-a-Service (ImpaaS) allows attackers to systematically collect and enforce user profiles (consisting of user credentials, cookies, device and behavioural fingerprints, and other metadata) to circumvent risk-based authentication systems and effectively bypass multi-factor authentication mechanisms. We present the ImpaaS model and evaluate its implementation by analysing the operation of a large, invite-only, Russian ImpaaS platform providing user profiles for more than 260,000 Internet users worldwide. Our findings suggest that the ImpaaS model is growing, and provides the mechanisms needed to systematically evade authentication controls across multiple platforms, while providing attackers with a reliable, up-to-date, and semi-automated environment enabling target selection and user impersonation against Internet users at scale.
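The core mechanism described above — replaying a victim's full profile so that a risk-based authentication system sees a familiar device and skips step-up authentication — can be illustrated with a minimal sketch. All names, fields, and the scoring rule below are hypothetical; real risk engines weigh many more signals.

```python
# Hypothetical sketch of the profile bundle an ImpaaS platform sells, and
# why replaying it defeats risk-based authentication: the victim's cookies
# and fingerprints make the attacker's session look like the victim's own.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    credentials: dict                  # username/password pairs
    cookies: dict                      # session cookies per site
    device_fingerprint: dict           # user agent, screen size, timezone, ...
    behavioural_fingerprint: dict = field(default_factory=dict)

def risk_score(login_fp: dict, known_fp: dict) -> float:
    """Toy risk check: fraction of fingerprint attributes that differ."""
    keys = set(known_fp) | set(login_fp)
    mismatches = sum(1 for k in keys if login_fp.get(k) != known_fp.get(k))
    return mismatches / max(len(keys), 1)

victim_fp = {"ua": "Mozilla/5.0", "screen": "1920x1080", "tz": "UTC+1"}
attacker_fp = {"ua": "curl/8.0", "screen": "1024x768", "tz": "UTC-5"}

print(risk_score(attacker_fp, victim_fp))  # 1.0: every attribute differs -> step-up auth
print(risk_score(victim_fp, victim_fp))    # 0.0: replayed profile -> no MFA prompt
```

A score of 0.0 for the replayed profile is exactly the outcome the ImpaaS model industrializes: the purchased fingerprint, not the stolen password alone, is what silences the MFA challenge.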



Related research

Yoo Chung, Dongman Lee (2005)
The Echo protocol attempts secure location verification using physical limits imposed by the speeds of light and sound. While the protocol is able to guarantee that a certain object is within a certain region, it cannot ensure the authenticity of further messages from the object without using cryptography. This paper describes an impersonation attack against the protocol based on this weakness. It also describes a couple of approaches which can be used to defend against the attack.
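The physical-limits check at the heart of the Echo protocol can be sketched as follows. The constants are standard values, but the region test is a simplified illustration of the idea (RF challenge out, sound echo back), not the protocol's actual implementation — and, as the abstract notes, passing it says nothing about later messages.

```python
# Simplified Echo-style region check: an echo that returns faster than
# sound could travel from beyond the region boundary must have come from
# inside the region. Speed limits are physical, so they cannot be forged.
C_RF = 3.0e8      # speed of light, m/s (RF challenge)
V_SOUND = 343.0   # speed of sound in air, m/s (ultrasound echo)

def within_region(measured_time: float, region_radius: float) -> bool:
    """Accept iff the round trip beat the fastest possible trip from
    the region boundary (RF out at light speed, sound echo back)."""
    boundary_time = region_radius / C_RF + region_radius / V_SOUND
    return measured_time <= boundary_time
```

Note the check only bounds *where* the responder is; it does not bind an identity to the echo, which is precisely the gap the impersonation attack exploits.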
There are numerous opportunities for adversaries to observe user behavior remotely on the web. Additionally, keystroke biometric algorithms have advanced to the point where user identification and soft biometric trait recognition rates are commercially viable. This presents a privacy concern because masking spatial information, such as IP address, is not sufficient as users become more identifiable by their behavior. In this work, the well-known Chaum mix is generalized to a scenario in which users are separated by both space and time with the goal of preventing an observing adversary from identifying or impersonating the user. The criteria of a behavior obfuscation strategy are defined and two strategies are introduced for obfuscating typing behavior. Experimental results are obtained using publicly available keystroke data for three different types of input, including short fixed-text, long fixed-text, and long free-text. Identification accuracy is reduced by 20% with a 25 ms random keystroke delay not noticeable to the user.
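The delay-based obfuscation strategy evaluated above can be sketched in a few lines, assuming keystroke event timestamps in milliseconds; the function and parameter names are illustrative, not from the paper.

```python
import random

def obfuscate_timings(key_times_ms, max_delay_ms=25.0):
    """Add an independent random delay (up to ~25 ms, below the threshold
    a typist notices) to each keystroke timestamp, blurring the timing
    biometric an observer would use for identification."""
    delayed = [t + random.uniform(0.0, max_delay_ms) for t in key_times_ms]
    # Preserve event order so unlucky delays never reorder the keystrokes.
    for i in range(1, len(delayed)):
        delayed[i] = max(delayed[i], delayed[i - 1])
    return delayed
```

Because each delay is drawn independently, the inter-key intervals that keystroke biometric algorithms model are perturbed on every session, while the user experience is unchanged.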
This work considers a line-of-sight underwater acoustic sensor network (UWASN) consisting of $M$ underwater sensor nodes randomly deployed according to uniform distribution within a vertical half-disc (the so-called trusted zone). The sensor nodes report their sensed data to a sink node at the water surface on a shared underwater acoustic (UWA) reporting channel in a time-division multiple-access (TDMA) fashion, while an active-yet-invisible adversary (so-called Eve) is present in the close vicinity who aims to inject malicious data into the system by impersonating some Alice node. To this end, this work first considers an additive white Gaussian noise (AWGN) UWA channel, and proposes a novel, multiple-features based, two-step method at the sink node to thwart the potential impersonation attack by Eve. Specifically, the sink node exploits the noisy estimates of the distance, the angle of arrival, and the location of the transmit node as device fingerprints to carry out a number of binary hypothesis tests (for impersonation detection) as well as a number of maximum likelihood hypothesis tests (for transmitter identification when no impersonation is detected). We provide closed-form expressions for the error probabilities (i.e., the performance) of most of the hypothesis tests. We then consider the case of a UWA channel with colored noise and frequency-dependent pathloss, and derive a maximum-likelihood (ML) distance estimator as well as the corresponding Cramer-Rao bound (CRB). We then invoke the proposed two-step, impersonation detection framework by utilizing distance as the sole feature. Finally, we provide detailed simulation results for both the AWGN UWA channel and the UWA channel with colored noise. Simulation results verify that the proposed scheme is indeed effective for a UWA channel with colored noise and frequency-dependent pathloss.
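The two-step structure can be sketched with distance as the sole fingerprint, as in the variant the abstract describes for the colored-noise channel. The threshold `epsilon` and the nearest-fingerprint rule below are simplified illustrations of the binary and ML hypothesis tests, not the paper's derived decision rules.

```python
# Hedged sketch of a two-step check at the sink node, using only a noisy
# distance estimate d_hat as the device fingerprint.
def two_step_check(d_hat, known_distances, epsilon):
    """Step 1: impersonation detection (binary hypothesis test).
       Step 2: ML transmitter identification if no impersonation detected."""
    # H0: transmission came from some legitimate node; H1: it came from Eve.
    residuals = [abs(d_hat - d) for d in known_distances]
    if min(residuals) > epsilon:
        return ("impersonation", None)   # no fingerprint close enough: flag Eve
    # ML identification: attribute the packet to the nearest fingerprint.
    return ("legitimate", residuals.index(min(residuals)))
```

Choosing `epsilon` trades false alarms (legitimate nodes rejected) against missed detections (Eve accepted); the paper's closed-form error probabilities quantify exactly this trade-off.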
This paper investigates the impact of authentication on effective capacity (EC) of an underwater acoustic (UWA) channel. Specifically, the UWA channel is under impersonation attack by a malicious node (Eve) present in the close vicinity of the legitimate node pair (Alice and Bob); Eve tries to inject its malicious data into the system by making Bob believe that she is indeed Alice. To thwart the impersonation attack by Eve, Bob utilizes the distance of the transmit node as the feature/fingerprint to carry out feature-based authentication at the physical layer. Owing to authentication at Bob, the lack of channel knowledge at the transmit node (Alice or Eve), and the threshold-based decoding error model, the relevant dynamics of the considered system can be modelled by a Markov chain (MC). Thus, we compute the state-transition probabilities of the MC, and the moment generating function for the service process corresponding to each state. This enables us to derive a closed-form expression of the EC in terms of authentication parameters. Furthermore, we compute the optimal transmission rate (at Alice) through the gradient-descent (GD) technique and the artificial neural network (ANN) method. Simulation results show that the EC decreases under severe authentication constraints (i.e., more false alarms and more transmissions by Eve). Simulation results also reveal that the (optimal transmission rate) performance of the ANN technique is quite close to that of the GD method.
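For reference, the standard definition of effective capacity that such a closed-form expression specializes is the following (this is the textbook definition, not the paper's final formula), with $S(t)$ the cumulative service process and $\theta$ the QoS exponent:

```latex
EC(\theta) = -\lim_{t\to\infty} \frac{1}{\theta t} \ln \mathbb{E}\!\left[ e^{-\theta S(t)} \right]
```

Modelling the system as a Markov chain makes the moment generating function of $S(t)$ tractable state by state, which is what allows the limit above to be evaluated in closed form.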
We introduce a new attack against face verification systems based on Deep Neural Networks (DNN). The attack relies on the introduction into the network of a hidden backdoor, whose activation at test time induces a verification error allowing the attacker to impersonate any user. The new attack, named Master Key backdoor attack, operates by interfering with the training phase, so as to instruct the DNN to always output a positive verification answer when the face of the attacker is presented at its input. With respect to existing attacks, the new backdoor attack offers much more flexibility, since the attacker does not need to know the identity of the victim beforehand. In this way, the attacker can deploy a Universal Impersonation attack in an open-set framework, impersonating any enrolled user, even those that were not yet enrolled in the system when the attack was conceived. We present a practical implementation of the attack targeting a Siamese-DNN face verification system, and show its effectiveness when the system is trained on the VGGFace2 dataset and tested on the LFW and YTF datasets. According to our experiments, the Master Key backdoor attack provides a high attack success rate even when the ratio of poisoned training data is as small as 0.01, thus raising a new alarm regarding the use of DNN-based face verification systems in security-critical applications.
