
Face Flashing: a Secure Liveness Detection Protocol based on Light Reflections

 Added by Di Tang
 Publication date 2018
Language: English





Face authentication systems are becoming increasingly prevalent, especially with the rapid development of deep learning technologies. However, human facial information is easily captured and reproduced, which makes face authentication systems vulnerable to various attacks. Liveness detection is an important defense technique against such attacks, but existing solutions do not provide clear and strong security guarantees, especially in terms of time. To overcome these limitations, we propose a new liveness detection protocol called Face Flashing that significantly raises the bar for launching successful attacks on face authentication systems. By randomly flashing well-designed pictures on a screen and analyzing the reflected light, our protocol leverages physical characteristics of human faces: reflection processing at the speed of light, unique textural features, and uneven 3D shapes. By exploiting the working mechanisms of the screen and digital camera, our protocol is able to detect the subtle traces left by an attacking process. To demonstrate the effectiveness of Face Flashing, we implemented a prototype and performed thorough evaluations on a large dataset collected from real-world scenarios. The results show that our timing verification can effectively detect the time gap between legitimate authentications and malicious cases, and that our face verification can accurately differentiate 2D planes from 3D objects. The overall accuracy of our liveness detection system is 98.8%, and its robustness was evaluated in different scenarios; in the worst case, its accuracy decreased to a still-high 97.3%.
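As a purely illustrative sketch of the timing-verification idea (the function names, color-matching rule, and the 50 ms deadline are assumptions, not the paper's implementation), a challenge-response loop could flash random colors and reject responses whose reflections arrive too late or do not track the challenge:

```python
import random
import numpy as np

def generate_challenge(num_flashes, seed=None):
    """Generate a random sequence of screen colors (the light challenge)."""
    rng = random.Random(seed)
    return [(rng.randint(0, 255), rng.randint(0, 255), rng.randint(0, 255))
            for _ in range(num_flashes)]

def timing_verification(challenge, observed_colors, observed_delays_ms,
                        max_delay_ms=50.0):
    """Accept only if every reflected color tracks its challenge color and
    arrives within a per-frame deadline. A forger that re-renders frames on
    the fly adds processing delay, which this check rejects."""
    for expected, seen, delay in zip(challenge, observed_colors,
                                     observed_delays_ms):
        if delay > max_delay_ms:
            return False  # reflection arrived too late: likely re-rendered
        # dominant channel of the reflection should match the challenge
        if np.argmax(expected) != np.argmax(seen):
            return False
    return True
```

Because the challenge is random, an attacker cannot precompute a matching video; they must process each flash in real time, and the added latency is exactly what the deadline check targets.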

Related research

Last-generation GAN models can generate synthetic images that are visually indistinguishable from natural ones, raising the need for tools that distinguish fake from natural images and thus help preserve the trustworthiness of digital images. While modern GAN models can generate very high-quality images with no visible spatial artifacts, reconstructing consistent relationships among colour channels is expectedly more difficult. In this paper, we propose a method for distinguishing GAN-generated from natural images by exploiting inconsistencies among spectral bands, with a specific focus on synthetic face images. Specifically, we use cross-band co-occurrence matrices, in addition to spatial co-occurrence matrices, as input to a CNN model trained to distinguish between real and synthetic faces. The results of our experiments confirm the effectiveness of our approach, which outperforms a similar detection technique based on intra-band spatial co-occurrences only. The performance gain is particularly significant with regard to robustness against post-processing, such as geometric transformations, filtering, and contrast manipulations.
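A cross-band co-occurrence matrix is a joint histogram of pixel values taken from two colour bands. The sketch below is a simplification under stated assumptions: it pairs co-located pixels only, whereas detection pipelines of this kind typically also use spatial offsets and truncated/quantised values.

```python
import numpy as np

def cross_band_cooccurrence(band_a, band_b, levels=256):
    """Joint histogram of co-located pixel values in two colour bands:
    mat[i, j] counts pixels where band_a == i and band_b == j."""
    a = band_a.ravel().astype(np.intp)
    b = band_b.ravel().astype(np.intp)
    mat = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(mat, (a, b), 1)  # unbuffered accumulation handles repeats
    return mat
```

For a natural image the R/G and G/B matrices concentrate near the diagonal (channels are strongly correlated); inconsistencies in this structure are the cue the CNN learns to exploit.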
The vulnerability of Face Recognition Systems (FRS) to various kinds of attacks (both direct and indirect) and to face morphing attacks has received great interest from the biometric community. The goal of a morphing attack is to subvert the FRS at Automatic Border Control (ABC) gates by presenting an Electronic Machine Readable Travel Document (eMRTD), or e-passport, that was obtained using a morphed face image. Since the application process for an e-passport in the majority of countries requires the applicant to present a passport photo, a malicious actor and an accomplice can generate a morphed face image and use it to obtain an e-passport. An e-passport with a morphed face image can be used by both the malicious actor and the accomplice to cross the border, as the morphed face image can be verified against both of them. This poses a significant threat: the malicious actor can cross the border without leaving a trace of his or her criminal background, while only the details of the accomplice are recorded in the log of the access control system. This survey presents a systematic overview of the progress made in the area of face morphing, covering both morph generation and morph detection. We describe and illustrate various aspects of face morphing attacks, including different techniques for generating morphed face images, the state of the art in Morph Attack Detection (MAD) algorithms based on a stringent taxonomy, and the availability of public databases that allow new MAD algorithms to be benchmarked in a reproducible manner. The outcomes of competitions/benchmarking, vulnerability assessments, and performance evaluation metrics are also presented comprehensively. Furthermore, we discuss the open challenges and potential future work that need to be addressed in this evolving field of biometrics.
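The core of morph generation is blending two face images so the result resembles both subjects. The sketch below shows only the pixel-blending step, as a hedged illustration: real morphing tools first detect facial landmarks and warp both faces into alignment before blending, which this minimal version omits.

```python
import numpy as np

def morph_faces(face_a, face_b, alpha=0.5):
    """Pixel-wise alpha blend of two pre-aligned face images of the same
    shape. alpha=0 returns face_a, alpha=1 returns face_b; intermediate
    values mix the identities -- the basis of a morphing attack."""
    assert face_a.shape == face_b.shape, "faces must be pre-aligned"
    blended = ((1.0 - alpha) * face_a.astype(np.float64)
               + alpha * face_b.astype(np.float64))
    return np.clip(blended, 0, 255).astype(np.uint8)
```

An alpha near 0.5 is what makes the morph verifiable against both contributors, which is precisely why MAD algorithms look for the residual blending and warping artifacts.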
While it is known that unconditionally secure position-based cryptography is impossible in both the classical and the quantum setting, it has been shown that some quantum protocols for position verification are secure against attackers who share a quantum state of bounded dimension. In this work, we consider the security of two protocols for quantum position verification that combine a single qubit with classical strings of total length $2n$: the qubit routing protocol, where the classical information prescribes the qubit's destination, and a variant of the BB84 protocol for position verification, where the classical information prescribes the basis in which the qubit should be measured. We show that either protocol is secure for a randomly chosen function if each of the attackers holds at most $n/2 - 5$ qubits. With this, we show for the first time that there exists a quantum position verification protocol where the ratio between the quantum resources an honest prover needs and the quantum resources the attackers need to break the protocol is unbounded. The verifiers need only increase the amount of classical resources to force the attackers to use more quantum resources. Concrete efficient functions for both protocols are also given -- at the expense of a weaker but still unbounded ratio of quantum resources for successful attackers. Finally, we show that both protocols are robust with respect to noise, making them appealing for applications.
Face recognition has made remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can have serious consequences in real-world face recognition applications with security-sensitive purposes. Adversarial attacks are widely studied because they can expose the vulnerability of models before deployment. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where attackers have no access to the model parameters and gradients and can only acquire hard-label predictions by sending queries to the target model. This attack setting is more practical for real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm that models the local geometry of the search directions and reduces the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method, which induces a minimal perturbation on an input face image with fewer queries. We also apply the proposed method to successfully attack a real-world face recognition system.
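The decision-based setting means the attacker sees only accept/reject answers. As a minimal sketch of that idea (not the paper's evolutionary algorithm -- the step sizes, the lack of covariance adaptation, and all names here are assumptions), one can start from an already-adversarial point and keep random mutations that stay adversarial while shrinking the perturbation:

```python
import numpy as np

def evolutionary_attack(query_fn, x, x_adv, steps=1000, sigma=0.01, rng=None):
    """Hard-label black-box attack sketch: starting from adversarial x_adv,
    propose candidates that drift toward the original x with Gaussian noise,
    and keep a candidate only if it remains adversarial (query_fn -> True)
    and is closer to x, i.e. a smaller perturbation."""
    rng = np.random.default_rng() if rng is None else rng
    best = x_adv.copy()
    for _ in range(steps):
        # move 5% toward the original image, plus a small random mutation
        candidate = best + 0.05 * (x - best) + sigma * rng.standard_normal(x.shape)
        if (query_fn(candidate)
                and np.linalg.norm(candidate - x) < np.linalg.norm(best - x)):
            best = candidate  # accepted: still adversarial, smaller norm
    return best
```

The paper's contribution is making each such query count, e.g. by adapting the mutation distribution to the local geometry of the decision boundary; this sketch only shows the query-and-accept skeleton.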
Man Zhou, Qian Wang (2021)
Face authentication usually relies on deep learning models to verify users with high recognition accuracy. However, face authentication systems are vulnerable to various attacks that cheat the models by manipulating the digital counterparts of human faces. So far, many liveness detection schemes have been developed to prevent such attacks. Unfortunately, attackers can still bypass these schemes by constructing wide-ranging, sophisticated attacks. We study the security of existing face authentication services (e.g., Microsoft, Amazon, and Face++) and typical liveness detection approaches. In particular, we develop a new type of attack, the low-cost 3D projection attack, which projects manipulated face videos onto a 3D face model and can easily evade these face authentication services and liveness detection approaches. In response, we propose FaceLip, a novel liveness detection scheme for face authentication that utilizes unforgeable lip motion patterns built upon well-designed acoustic signals to provide a strong security guarantee. The lip motion patterns of each user are unforgeable because FaceLip verifies them by capturing and analyzing acoustic signals that are dynamically generated according to random challenges, which ensures that the signals used for liveness detection cannot be manipulated. Specifically, we develop robust algorithms for FaceLip that eliminate the impact of noisy signals in the environment and can thus accurately infer lip motions at larger distances. We prototype FaceLip on off-the-shelf smartphones and conduct extensive experiments under different settings. Our evaluation with 44 participants validates the effectiveness and robustness of FaceLip.
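The abstract does not specify FaceLip's signal design, so the following is only an illustrative sketch under stated assumptions (the function, the tone structure, and the 17-20 kHz band are hypothetical): a randomized acoustic challenge could concatenate short tones at unpredictable near-inaudible frequencies, so the reflection off the moving lips cannot be pre-recorded.

```python
import numpy as np

def acoustic_challenge(freqs_hz, tone_ms=50, fs=48000):
    """Concatenate short sine tones at the given (randomly chosen)
    frequencies into one challenge signal sampled at fs Hz. Because the
    frequency sequence is random per session, a replayed recording will
    not match the expected reflection."""
    t = np.arange(int(fs * tone_ms / 1000)) / fs  # time axis for one tone
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs_hz])
```

A verifier would emit this from the phone's speaker and correlate the microphone signal against the known challenge; the Doppler/echo changes caused by genuine lip motion are what the scheme analyzes.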
