
Obfuscating Keystroke Time Intervals to Avoid Identification and Impersonation

Published by: John V Monaco
Publication date: 2016
Research field: Informatics engineering
Paper language: English





There are numerous opportunities for adversaries to observe user behavior remotely on the web. Additionally, keystroke biometric algorithms have advanced to the point where user identification and soft biometric trait recognition rates are commercially viable. This presents a privacy concern because masking spatial information, such as IP address, is not sufficient as users become more identifiable by their behavior. In this work, the well-known Chaum mix is generalized to a scenario in which users are separated by both space and time with the goal of preventing an observing adversary from identifying or impersonating the user. The criteria of a behavior obfuscation strategy are defined and two strategies are introduced for obfuscating typing behavior. Experimental results are obtained using publicly available keystroke data for three different types of input, including short fixed-text, long fixed-text, and long free-text. Identification accuracy is reduced by 20% with a 25 ms random keystroke delay not noticeable to the user.
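The delay-based obfuscation strategy can be sketched as follows. This is a minimal illustration only, not the paper's implementation; the function name and the millisecond timestamp representation are assumptions.

```python
import random

def obfuscate_keystrokes(press_times_ms, max_delay_ms=25):
    """Add an independent uniform random delay in [0, max_delay_ms] to each
    keystroke timestamp, masking the user's characteristic time intervals.
    Assumes timestamps are non-decreasing key-press times in milliseconds."""
    delayed = [t + random.uniform(0, max_delay_ms) for t in press_times_ms]
    # Random delays may reorder adjacent events; restore monotone order.
    return sorted(delayed)
```

A 25 ms maximum delay matches the setting the abstract reports as reducing identification accuracy by 20% while remaining unnoticeable to the user.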




Read also

56 - Yoo Chung, Dongman Lee 2005
The Echo protocol tries to do secure location verification using physical limits imposed by the speeds of light and sound. While the protocol is able to guarantee that a certain object is within a certain region, it cannot ensure the authenticity of further messages from the object without using cryptography. This paper describes an impersonation attack against the protocol based on this weakness. It also describes a couple of approaches which can be used to defend against the attack.
In this paper we provide evidence of an emerging criminal infrastructure enabling impersonation attacks at scale. Impersonation-as-a-Service (ImpaaS) allows attackers to systematically collect and enforce user profiles (consisting of user credentials, cookies, device and behavioural fingerprints, and other metadata) to circumvent risk-based authentication systems and effectively bypass multi-factor authentication mechanisms. We present the ImpaaS model and evaluate its implementation by analysing the operation of a large, invite-only, Russian ImpaaS platform providing user profiles for more than $260000$ Internet users worldwide. Our findings suggest that the ImpaaS model is growing, and provides the mechanisms needed to systematically evade authentication controls across multiple platforms, while providing attackers with a reliable, up-to-date, and semi-automated environment enabling target selection and user impersonation against Internet users at scale.
In the cloud computing era, data privacy is a critical concern. Memory access patterns can leak private information. This data leak is particularly challenging for deep learning recommendation models, where data associated with a user is used to train a model. Recommendation models use embedding tables to map categorical data (embedding table indices) to a large vector space, which is easier for recommendation systems to learn. Oblivious RAM (ORAM) and its enhancements are proposed solutions to prevent memory access patterns from leaking information. ORAM solutions hide access patterns by fetching multiple data blocks for each demand fetch and then shuffling the location of blocks after each access. In this paper, we propose a new PathORAM architecture designed to protect user input privacy when training recommendation models. Look Ahead ORAM exploits the fact that during training, embedding table indices that are going to be accessed in a future batch are known beforehand. Look Ahead ORAM preprocesses future training samples to identify indices that will co-occur and groups these accesses into a large superblock. Look Ahead ORAM performs same-path assignment by grouping multiple data blocks into superblocks. Accessing a superblock requires fewer fetched data blocks than accessing all data blocks without grouping them as superblocks. Effectively, Look Ahead ORAM reduces the number of reads/writes per access. Look Ahead ORAM also introduces a fat-tree structure for PathORAM, i.e. a tree with variable bucket size. Look Ahead ORAM achieves 2x speedup compared to PathORAM and reduces the bandwidth requirement by 3.15x while providing the same security as PathORAM.
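The co-occurrence grouping idea can be illustrated with a greedy sketch. This is a hedged simplification, not the paper's algorithm: real superblocks may hold more than two blocks, and the function name and pair-only packing are assumptions.

```python
from collections import defaultdict

def group_superblocks(future_batches):
    """Scan upcoming training batches, count which embedding-table indices
    co-occur, and greedily pack the most frequently co-occurring pairs into
    superblocks that a PathORAM-style scheme could map to the same path."""
    cooccur = defaultdict(int)
    indices = set()
    for batch in future_batches:
        uniq = sorted(set(batch))
        indices.update(uniq)
        for i in range(len(uniq)):
            for j in range(i + 1, len(uniq)):
                cooccur[(uniq[i], uniq[j])] += 1
    assigned, superblocks = set(), []
    # Merge the most frequently co-occurring pairs first (ties broken by key).
    for (a, b), _ in sorted(cooccur.items(), key=lambda kv: (-kv[1], kv[0])):
        if a in assigned or b in assigned:
            continue
        superblocks.append([a, b])
        assigned.update((a, b))
    # Indices that were never paired become singleton blocks.
    for idx in sorted(indices - assigned):
        superblocks.append([idx])
    return superblocks
```

Fetching a two-block superblock in one path access is what saves bandwidth relative to two independent PathORAM accesses.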
With the rapid advancement of technology, different biometric user authentication and identification systems are emerging. Traditional biometric systems like face, fingerprint, and iris recognition, keystroke dynamics, etc. are prone to cyber-attacks and suffer from different disadvantages. Electroencephalography (EEG) based authentication has shown promise in overcoming these limitations. However, EEG-based authentication is less accurate due to signal variability under different psychological and physiological conditions. On the other hand, keystroke dynamics-based identification offers high accuracy but suffers from different spoofing attacks. To overcome these challenges, we propose a novel multimodal biometric system combining EEG and keystroke dynamics. Firstly, a dataset was created by acquiring both keystroke dynamics and EEG signals from 10 users with 500 trials per user over 10 different sessions. Different statistical, time, and frequency domain features were extracted and ranked from the EEG signals, and key features were extracted from the keystroke dynamics. Different classifiers were trained, validated, and tested for both individual and combined modalities under two classification strategies - personalized and generalized. Results show that very high accuracy can be achieved in both the generalized and personalized cases for the combination of EEG and keystroke dynamics. The identification and authentication accuracies were found to be 99.80% and 99.68% for Extreme Gradient Boosting (XGBoost) and Random Forest classifiers, respectively, which outperform the individual modalities by a significant margin (around 5 percent). We also developed a binary template matching-based algorithm, which achieves 93.64% accuracy while running 6x faster. The proposed method is secure and reliable for any kind of biometric authentication.
80 - Ang Li, Jiayi Guo, Huanrui Yang 2019
Deep learning has been widely applied in many computer vision applications, with remarkable success. However, running deep learning models on mobile devices is generally challenging due to the limitation of computing resources. A popular alternative is to use cloud services to run deep learning models to process raw data. This, however, imposes privacy risks. Some prior works proposed sending the features extracted from raw data to the cloud. Unfortunately, these extracted features can still be exploited by attackers to recover raw images and to infer embedded private attributes. In this paper, we propose an adversarial training framework, DeepObfuscator, which prevents the usage of the features for reconstruction of the raw images and inference of private attributes, while retaining useful information for the intended cloud service. DeepObfuscator includes a learnable obfuscator that is designed to hide privacy-related sensitive information from the features by performing our proposed adversarial training algorithm. The proposed algorithm simulates the game between an attacker, who makes efforts to reconstruct raw images and infer private attributes from the extracted features, and a defender, who aims to protect user privacy. By deploying the trained obfuscator on the smartphone, features can be locally extracted and then sent to the cloud. Our experiments on the CelebA and LFW datasets show that the quality of the images reconstructed from the obfuscated features is dramatically decreased from 0.9458 to 0.3175 in terms of multi-scale structural similarity, so the person in the reconstructed image can hardly be re-identified. The classification accuracy of the private attributes inferred by the attacker is significantly reduced to a random-guessing level.
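The attacker-defender game described above can be summarized by a simple combined objective. The losses, weights, and function name below are illustrative assumptions, not the paper's exact formulation.

```python
def obfuscator_objective(task_loss, recon_loss, attr_loss, lam=1.0, mu=1.0):
    """Illustrative defender objective: minimize the cloud task's loss while
    *maximizing* the attacker's image-reconstruction and private-attribute
    losses (hence the negative signs). lam and mu trade utility for privacy."""
    return task_loss - lam * recon_loss - mu * attr_loss
```

The attacker, symmetrically, trains its reconstruction and attribute-inference networks to drive recon_loss and attr_loss down, which is what makes the training adversarial.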
