
Securing Face Liveness Detection Using Unforgeable Lip Motion Patterns

Added by Man Zhou
Publication date: 2021
Language: English





Face authentication usually relies on deep learning models to verify users with high recognition accuracy. However, face authentication systems are vulnerable to various attacks that deceive the models by manipulating digital counterparts of human faces. Many liveness detection schemes have been developed to prevent such attacks; unfortunately, attackers can still bypass them by constructing a wide range of sophisticated attacks. We study the security of existing face authentication services (e.g., Microsoft, Amazon, and Face++) and typical liveness detection approaches. In particular, we develop a new type of attack, the low-cost 3D projection attack, which projects manipulated face videos on a 3D face model and can easily evade these face authentication services and liveness detection approaches. To defend against such attacks, we propose FaceLip, a novel liveness detection scheme for face authentication that utilizes unforgeable lip motion patterns built upon well-designed acoustic signals to provide a strong security guarantee. Each user's lip motion patterns are unforgeable because FaceLip verifies them by capturing and analyzing acoustic signals that are dynamically generated according to random challenges, which ensures that the signals used for liveness detection cannot be manipulated. Specifically, we develop robust algorithms for FaceLip that eliminate the impact of ambient noise and thus allow lip motions to be inferred accurately at larger distances. We prototype FaceLip on off-the-shelf smartphones and conduct extensive experiments under different settings. Our evaluation with 44 participants validates the effectiveness and robustness of FaceLip.
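The abstract gives no implementation details, but the core sensing step (emitting a near-ultrasonic tone from the phone's speaker and reading lip motion out of Doppler shifts around the carrier in the microphone recording) can be sketched as below. This is a minimal illustration under assumed parameters (48 kHz sampling, a 20 kHz carrier, a simple band-pass plus STFT pipeline, and the hypothetical function name lip_motion_profile), not the authors' FaceLip implementation; their challenge generation and noise-elimination algorithms are not reproduced here.

```python
import numpy as np
from scipy import signal

FS = 48000          # assumed microphone sampling rate (Hz)
CARRIER = 20000     # assumed near-ultrasonic carrier frequency (Hz)

def lip_motion_profile(recording: np.ndarray) -> np.ndarray:
    """Estimate a coarse lip-motion profile from Doppler shifts
    around the emitted carrier tone (illustrative only)."""
    # Band-pass around the carrier to suppress ambient, mostly
    # low-frequency, noise (mirroring FaceLip's robustness goal).
    sos = signal.butter(6, [CARRIER - 500, CARRIER + 500],
                        btype="bandpass", fs=FS, output="sos")
    narrow = signal.sosfilt(sos, recording)

    # Short-time Fourier transform: lip movement toward/away from
    # the phone shifts energy above/below the carrier bin.
    f, t, Z = signal.stft(narrow, fs=FS, nperseg=4096, noverlap=3072)
    power = np.abs(Z) ** 2

    carrier_bin = int(np.argmin(np.abs(f - CARRIER)))
    # Doppler asymmetry: energy above minus energy below the carrier.
    above = power[carrier_bin + 1 : carrier_bin + 6].sum(axis=0)
    below = power[carrier_bin - 5 : carrier_bin].sum(axis=0)
    return (above - below) / (above + below + 1e-12)

# Example: a synthetic 2-second recording (replace with real audio).
n = 2 * FS
rec = np.random.randn(n) + np.sin(2 * np.pi * CARRIER / FS * np.arange(n))
print(lip_motion_profile(rec).shape)
```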

Related Research

Due to the rise of cyber-attacks on Industrial Control Systems (ICSs) in the recent decade, various security frameworks have been designed for anomaly detection. While advanced ICS attacks use sequential phases to launch their final attacks, existing anomaly detection methods can only monitor a single source of data. Analysis of multiple security data sources can therefore provide comprehensive, system-wide anomaly detection in industrial networks. In this paper, we propose an anomaly detection framework for ICSs that consists of two stages: i) blockchain-based log management, where the logs of ICS devices are collected in a secure and distributed manner, and ii) multi-source anomaly detection, where the blockchain logs are analysed using multi-source deep learning, which in turn provides a system-wide anomaly detection method. We validated our framework using two ICS datasets: a factory automation dataset and the Secure Water Treatment (SWaT) dataset. These datasets contain normal and abnormal traffic at both the physical and network levels. The performance of our new framework is compared with single-source machine learning methods. The precision of our framework is 95%, which is comparable with that of single-source anomaly detectors.
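A minimal sketch of the multi-source idea: one network branch per log source, fused before a shared anomaly head. All layer sizes, feature dimensions, and the two-branch layout are placeholder assumptions, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class MultiSourceDetector(nn.Module):
    """Illustrative two-branch detector: one branch per log source
    (e.g., physical sensor readings and network traffic features),
    fused before a binary normal/anomaly head."""
    def __init__(self, phys_dim: int = 24, net_dim: int = 40):
        super().__init__()
        self.phys = nn.Sequential(nn.Linear(phys_dim, 64), nn.ReLU())
        self.net = nn.Sequential(nn.Linear(net_dim, 64), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, phys_x, net_x):
        # Concatenate per-source embeddings, then score jointly.
        fused = torch.cat([self.phys(phys_x), self.net(net_x)], dim=-1)
        return self.head(fused)  # logit: > 0 suggests anomaly

model = MultiSourceDetector()
logit = model(torch.randn(8, 24), torch.randn(8, 40))
print(logit.shape)  # torch.Size([8, 1])
```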
This work shows that it is possible to fool/attack recent state-of-the-art face detectors based on single-stage networks. Successful attacks on face detectors could be a serious vulnerability when deploying a smart surveillance system that relies on them. We show that existing adversarial perturbation methods are not effective for such an attack, especially when there are multiple faces in the input image, because the adversarial perturbation generated for one face may disrupt the perturbation for another face. We call this problem the Instance Perturbation Interference (IPI) problem. We address the IPI problem by studying the relationship between the deep neural network's receptive field and the adversarial perturbation, and propose the Localized Instance Perturbation (LIP) method, which constrains the adversarial perturbation to the Effective Receptive Field (ERF) of a target to perform the attack. Experimental results show that the LIP method substantially outperforms existing adversarial perturbation generation methods, often by a factor of 2 to 10.
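The key mechanism (confining each face's perturbation to a local region approximating its Effective Receptive Field so perturbations for different faces do not interfere) can be illustrated with a masked FGSM-style step. The circular mask, radius, and epsilon are assumptions for illustration; the paper's exact LIP formulation may differ:

```python
import numpy as np

def localized_perturbation(image, grad, center, erf_radius, eps=0.03):
    """Sign-gradient step confined to a circular mask approximating
    the Effective Receptive Field (ERF) around one target face."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    cy, cx = center
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= erf_radius ** 2
    delta = eps * np.sign(grad) * mask[..., None]  # zero outside ERF
    return np.clip(image + delta, 0.0, 1.0)

# Two faces: perturbing each within its own ERF avoids the Instance
# Perturbation Interference described above.
img = np.random.rand(256, 256, 3)   # stand-in image in [0, 1]
g = np.random.randn(256, 256, 3)    # stand-in loss gradient
adv = localized_perturbation(img, g, center=(64, 64), erf_radius=40)
adv = localized_perturbation(adv, g, center=(190, 190), erf_radius=40)
```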
Face authentication systems are becoming increasingly prevalent, especially with the rapid development of deep learning technologies. However, human facial information is easy to capture and reproduce, which makes face authentication systems vulnerable to various attacks. Liveness detection is an important defense against such attacks, but existing solutions do not provide clear and strong security guarantees, especially in terms of time. To overcome these limitations, we propose a new liveness detection protocol called Face Flashing that significantly raises the bar for launching successful attacks on face authentication systems. By randomly flashing well-designed pictures on a screen and analyzing the reflected light, our protocol leverages physical characteristics of human faces: reflection processing at the speed of light, unique textural features, and uneven 3D shapes. By exploiting the working mechanisms of the screen and digital cameras, our protocol is able to detect the subtle traces left by an attacking process. To demonstrate the effectiveness of Face Flashing, we implemented a prototype and performed thorough evaluations with a large dataset collected from real-world scenarios. The results show that our timing verification can effectively detect the time gap between legitimate authentications and malicious cases, and that our face verification can accurately differentiate 2D planes from 3D objects. The overall accuracy of our liveness detection system is 98.8%, and its robustness was evaluated in different scenarios; in the worst case, its accuracy decreased to a still-high 97.3%.
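The challenge-response core (checking that the light reflected from the face tracks a randomly flashed screen signal with essentially no lag) can be sketched as a lagged-correlation test. The lag budget, threshold, and brightness signal below are illustrative assumptions, not the protocol's actual timing or face verification procedure:

```python
import numpy as np

def reflection_matches_challenge(frame_brightness, challenge,
                                 max_lag_frames=2, threshold=0.8):
    """Check that per-frame face brightness tracks the random screen
    challenge within a small lag. A live face reflects at the speed
    of light; a replay pipeline adds capture/playback delay and
    decorrelates the two signals."""
    x = (frame_brightness - frame_brightness.mean()) / frame_brightness.std()
    c = (challenge - challenge.mean()) / challenge.std()
    best = max(
        np.corrcoef(x[lag:], c[:len(c) - lag])[0, 1]
        for lag in range(max_lag_frames + 1)
    )
    return best >= threshold

# Example: 60 frames of face-region brightness vs. the random flash
# intensities shown on the screen (one-frame display latency + noise).
chal = np.random.rand(60)
bright = np.concatenate([[0.0], chal[:-1]]) * 0.9 + 0.05 * np.random.rand(60)
print(reflection_matches_challenge(bright, chal))  # True for a live face
```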
Fingerprint-based recognition has been widely deployed in various applications. However, current recognition systems are vulnerable to spoofing attacks that use an artificial replica of a fingerprint to deceive the sensors. In such scenarios, fingerprint liveness detection ensures the actual presence of a real, legitimate fingerprint, in contrast to a fake, self-manufactured synthetic sample. In this paper, we propose a static software-based approach that uses quality features to detect liveness in a fingerprint. We extract features from a single fingerprint image to overcome the issues faced by dynamic software-based approaches, which require longer computation time and user cooperation. The proposed system extracts 8 sensor-independent quality features at the local level, capturing minute details of the ridge-valley structure of real and fake fingerprints. These local quality features constitute a 13-dimensional feature vector. The system is tested on the publicly available dataset of the LivDet 2009 competition. The experimental results show that the proposed method outperforms current state-of-the-art approaches, providing the lowest average classification error of 5.3% on LivDet 2009. Additionally, the effectiveness of the best-performing features on LivDet 2009 is evaluated on the newer LivDet 2015 dataset, which contains fingerprints fabricated using unknown spoof materials. An average classification error rate of 4.22% is achieved, compared with the 4.49% obtained by the LivDet 2015 winner. Further, the proposed system uses a single fingerprint image, which results in faster processing and makes it more user-friendly.
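The general pattern of static quality-feature extraction (computing local ridge-valley statistics block by block and summarizing them into a fixed-length vector) might look like the sketch below. It uses generic measures (contrast, variance, gradient energy) and does not reproduce the paper's 13 specific features:

```python
import numpy as np

def blockwise_quality_features(img: np.ndarray, block: int = 16):
    """Compute simple block-wise quality statistics over a grayscale
    fingerprint image; illustrative stand-ins for ridge-valley
    quality features, not the paper's feature set."""
    feats = []
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y + block, x:x + block].astype(float)
            gy, gx = np.gradient(b)
            contrast = b.max() - b.min()         # ridge-valley contrast
            energy = (gx ** 2 + gy ** 2).mean()  # gradient energy
            feats.append((contrast, b.var(), energy))
    f = np.array(feats)
    # Summarize block statistics into one fixed-length vector
    # (mean and std of each local measure -> 6 dims in this sketch).
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

vec = blockwise_quality_features(np.random.rand(160, 160))
print(vec.shape)  # (6,)
```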
Transparent authentication (TA) schemes are those in which a user is authenticated by a verifier without explicit user interaction, promising high usability and security simultaneously. The majority of TA implementations rely on the received signal strength as an indicator of the proximity of a user device (prover). However, such implicit proximity verification is not secure against an adversary who can relay messages over a larger distance. In this paper, we propose a novel approach for thwarting relay attacks in TA schemes: the prover permits access to authentication credentials only if it can confirm that it is near the verifier. We present STASH, a system for relay-resilient transparent authentication in which the prover performs proximity verification by comparing its approach trajectory towards the intended verifier with known authorized reference trajectories. Trajectories are measured using low-cost sensors commonly available on personal devices. We demonstrate the security of STASH against a class of adversaries and its ease of use by analyzing empirical data collected with a STASH prototype. STASH is efficient and can easily be integrated to complement existing TA schemes.
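Trajectory comparison of this kind is commonly done with dynamic time warping (DTW); the sketch below shows a hypothetical verifier that grants access only when the observed approach trajectory is close to an authorized reference. DTW and the distance threshold are assumptions, not necessarily STASH's actual matching algorithm:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Dynamic-time-warping distance between two 2D trajectories,
    tolerating differences in walking speed along the path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def near_verifier(observed, references, threshold=5.0):
    """Permit access only if the observed approach trajectory matches
    some authorized reference (threshold is an assumed tuning knob)."""
    return any(dtw_distance(observed, r) < threshold for r in references)

# Example: a noisy re-walk of a stored reference trajectory.
ref = np.cumsum(np.random.randn(50, 2) * 0.1, axis=0)
obs = ref + np.random.randn(50, 2) * 0.02
print(near_verifier(obs, [ref]))  # True
```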