
Morton Filters for Superior Template Protection for Iris Recognition

Added by Dr. Kiran Raja
Publication date: 2020
Research language: English





We address the fundamental performance issues of template protection (TP) for iris verification. We base our work on the popular Bloom-filter template protection and address its key challenges, namely sub-optimal performance and low unlinkability. Specifically, we focus on cases where Bloom-filter templates yield non-ideal performance due to the presence of large degradations within iris images. Iris recognition is challenged by a number of occluding factors, such as eyelashes within the captured image, occlusion due to eyelids, and low-quality iris images caused by motion blur. All such degrading factors result in unreliable iris codes and thereby lead to non-ideal biometric performance. These factors directly impact the protected templates derived from iris images when classical Bloom filters are employed. To this end, we propose and extend our earlier ideas on Morton filters for obtaining better and more reliable templates for iris recognition. Morton-filter-based TP for iris codes leverages the intra- and inter-class distribution by exploiting low-rank iris codes to derive the stable bits across iris images of a particular subject, while also analyzing the discriminable bits across different subjects. Such low-rank, non-noisy iris codes enable realizing template protection in a superior way that can be used not only in constrained settings but also in relaxed iris imaging. We further extend the work to analyze its applicability to VIS iris images by employing a large-scale public iris image database, UBIRIS (v1 & v2), captured in an unconstrained setting. Through a set of experiments, we demonstrate the applicability of the proposed approach and vet its strengths and weaknesses. A further contribution of this work lies in assessing the security of the proposed approach, where unlinkability is studied to indicate the behavior of the approach in relaxed iris imaging scenarios.
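The exact Morton-filter construction is detailed in the paper itself; as a rough illustration of the two ingredients the abstract mentions, the Python sketch below derives stable bits by majority vote over several enrolment iris codes of one subject and then builds a classical Bloom-filter-style protected template from the resulting code. Function names, the word height, and the block width are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def stable_iris_code(iris_codes):
    """Majority vote across several enrolment iris codes of one subject to
    keep only bits that are consistent across captures. This is a simplified
    stand-in for the low-rank / stable-bit analysis described in the paper."""
    stack = np.stack(iris_codes, axis=0)            # shape: (n_samples, rows, cols)
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

def bloom_filter_template(iris_code, word_height=8, block_width=16):
    """Classical Bloom-filter template protection sketch: the binary iris code
    is split into blocks of word_height x block_width; every column of a block
    is read as an integer address and that bit of the block's filter is set."""
    rows, cols = iris_code.shape
    filters = []
    for r in range(0, rows - word_height + 1, word_height):
        for c in range(0, cols - block_width + 1, block_width):
            block = iris_code[r:r + word_height, c:c + block_width]
            bf = np.zeros(2 ** word_height, dtype=np.uint8)
            for column in block.T:                  # each column -> one address
                addr = int("".join(map(str, column)), 2)
                bf[addr] = 1
            filters.append(bf)
    return filters
```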



Related research

In this paper we present a framework for secure identification using deep neural networks, and apply it to the task of template protection for face authentication. We use deep convolutional neural networks (CNNs) to learn a mapping from face images to maximum entropy binary (MEB) codes. The mapping is robust enough to tackle the problem of exact matching, yielding the same code for new samples of a user as the code assigned during training. These codes are then hashed using any hash function that follows the random oracle model (like SHA-512) to generate protected face templates (similar to text based password protection). The algorithm makes no unrealistic assumptions and offers high template security, cancelability, and state-of-the-art matching performance. The efficacy of the approach is shown on CMU-PIE, Extended Yale B, and Multi-PIE face databases. We achieve high (~95%) genuine accept rates (GAR) at zero false accept rate (FAR) with up to 1024 bits of template security.
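As a minimal sketch of the hashing step described above (the CNN mapping from face images to MEB codes is not reproduced), the following shows how an MEB code could be protected with SHA-512 and verified by exact matching; the per-user salt is an illustrative assumption used to make the template cancelable.

```python
import hashlib
import os

def protect_template(meb_code: bytes, salt: bytes = None):
    """Hash a maximum-entropy binary (MEB) code with SHA-512, password-style.
    A per-user salt (an assumption for illustration) makes the template
    cancelable: re-enrolling with a fresh salt yields an unrelated template."""
    salt = os.urandom(16) if salt is None else salt
    return salt, hashlib.sha512(salt + meb_code).hexdigest()

def verify(probe_code: bytes, salt: bytes, stored_digest: str) -> bool:
    """Exact matching: the CNN must reproduce the enrolled code bit-for-bit,
    which is why the robustness of the learned mapping matters."""
    return hashlib.sha512(salt + probe_code).hexdigest() == stored_digest
```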
Augmented reality (AR) or mixed reality (MR) platforms require spatial understanding to detect objects or surfaces, often including their structural (i.e., spatial geometry) and photometric (e.g., color and texture) attributes, to allow applications to place virtual or synthetic objects seemingly anchored onto real-world objects; in some cases, even allowing interactions between the physical and virtual objects. These functionalities require AR/MR platforms to capture the 3D spatial information with high resolution and frequency; however, this poses unprecedented risks to user privacy. Aside from the objects being detected, spatial information also reveals the location of the user with high specificity, e.g. in which part of the house the user is. In this work, we propose to leverage spatial generalizations coupled with conservative releasing to provide spatial privacy while maintaining data utility. We design an adversary that builds on existing place and shape recognition methods over 3D data, against which the proposed spatial privacy approach can be evaluated. We then simulate user movement within spaces, which reveals more of their surroundings as they move around, using 3D point clouds collected from Microsoft HoloLens. Results show that revealing no more than 11 generalized planes, accumulated from successively revealed spaces with a large enough radius (i.e. $r \leq 1.0$ m), can make an adversary fail in identifying the spatial location of the user at least half of the time. Furthermore, if the accumulated spaces are of smaller radius, i.e. each successively revealed space has $r \leq 0.5$ m, we can release up to 29 generalized planes while enjoying both better data utility and privacy.
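A minimal sketch of the conservative-releasing idea, assuming an upstream step that already produces generalized planes: the release budgets below are taken from the radii and plane counts reported in the abstract, and the handling of larger radii is a conservative assumption of ours rather than something the paper specifies.

```python
def conservative_release(generalized_planes, revealed_radius):
    """Cap how many generalized planes are exposed to an application.
    Budgets follow the abstract: ~11 planes for spaces with r <= 1.0 m,
    up to 29 planes for smaller spaces with r <= 0.5 m."""
    if revealed_radius <= 0.5:
        budget = 29
    elif revealed_radius <= 1.0:
        budget = 11
    else:
        budget = 0  # beyond the evaluated radii, release nothing (our assumption)
    return generalized_planes[:budget]
```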
Sheng Li, Xin Chen, Zhigao Zheng (2017)
In this paper, we propose a novel scheme for data hiding in the fingerprint minutiae template, which is the most popular template in fingerprint recognition systems. Various strategies are proposed for data embedding in order to maintain the accuracy of fingerprint recognition as well as the undetectability of data hiding. In bits-replacement-based data embedding, we replace the last few bits of each element of the original minutiae template with the data to be hidden. This strategy can be further improved using an optimized bits-replacement-based data embedding, which is able to minimize the impact of data hiding on the performance of fingerprint recognition. The third strategy is an order-preserving mechanism proposed to reduce the detectability of data hiding. Using such a mechanism, it would be difficult for an attacker to distinguish a minutiae template with hidden data from an original minutiae template. The experimental results show that the proposed data hiding scheme achieves sufficient capacity for hiding common personal data, while the accuracy of fingerprint recognition remains acceptable after data hiding.
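A minimal sketch of the basic bits-replacement strategy, assuming each minutia is an integer triple (x, y, theta); the field layout and parameter names are illustrative, and the optimized and order-preserving variants described above are not reproduced.

```python
def embed_bits(minutiae, payload_bits, n_lsb=2):
    """Overwrite the last n_lsb bits of each minutia field with payload bits.
    minutiae: list of (x, y, theta) integer triples (illustrative encoding).
    payload_bits: list of 0/1 values to hide."""
    bits = list(payload_bits)
    mask = (1 << n_lsb) - 1
    stego = []
    for (x, y, theta) in minutiae:
        fields = []
        for value in (x, y, theta):
            if bits:
                chunk = bits[:n_lsb]
                del bits[:n_lsb]
                chunk += [0] * (n_lsb - len(chunk))      # pad the final chunk
                value = (value & ~mask) | int("".join(map(str, chunk)), 2)
            fields.append(value)
        stego.append(tuple(fields))
    return stego
```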
The distinctiveness of the human iris has been measured by first extracting a set of features from the iris (an encoding) and then comparing these encoded feature sets to determine how distinct they are from one another. For example, John Daugman measures the distinctiveness of the human iris at 244 degrees of freedom, that is, Daugman's encoding maps irises into the equivalent of 2^244 distinct possibilities [2]. This paper shows, by direct pixel-by-pixel comparison of high-quality iris images, that the inherent number of degrees of freedom embodied in the human iris, independent of any encoding, is at least 536. When the resolution of these images is gradually reduced, the number of degrees of freedom decreases smoothly to 123 for the lowest-resolution images tested.
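The degrees-of-freedom figure quoted from Daugman comes from a binomial fit to the distribution of Hamming distances between different irises; a small helper illustrating that estimate is sketched below (the paper's own pixel-by-pixel methodology is not reproduced).

```python
import statistics

def degrees_of_freedom(impostor_hamming_distances):
    """Daugman-style binomial fit: if normalized Hamming distances between
    different irises have mean p and standard deviation sigma, the number of
    effective degrees of freedom is N = p * (1 - p) / sigma**2."""
    p = statistics.mean(impostor_hamming_distances)
    sigma = statistics.pstdev(impostor_hamming_distances)
    return p * (1.0 - p) / sigma ** 2
```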
Augmented and virtual reality is being deployed in different fields of application. Such applications might involve accessing or processing critical and sensitive information, which requires strict and continuous access control. Given that Head-Mounted Displays (HMD) developed for such applications commonly contain internal cameras for gaze-tracking purposes, we evaluate the suitability of such a setup for verifying users through iris recognition. In this work, we first evaluate a set of iris recognition algorithms suitable for HMD devices by investigating three well-established handcrafted feature extraction approaches, and to complement them, we also present an analysis using four deep learning models. Taking into consideration the minimalistic hardware requirements of a stand-alone HMD, we employ and adapt a recently developed miniature segmentation model (EyeMMS) for segmenting the iris. Further, to account for the non-ideal and non-collaborative capture of the iris, we define a new iris quality metric, termed Iris Mask Ratio (IMR), to quantify iris recognition performance. Motivated by the performance of iris recognition, we also propose continuous authentication of users in a non-collaborative capture setting in HMD. Through experiments on the publicly available OpenEDS dataset, we show that performance with EER = 5% can be achieved using deep learning methods in a general setting, along with high accuracy for continuous user authentication.
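The paper defines IMR precisely; going only by the abstract, a plausible reading is the fraction of segmented iris pixels that are not flagged as occluded. The sketch below uses hypothetical mask inputs to illustrate that reading.

```python
import numpy as np

def iris_mask_ratio(iris_region_mask, noise_mask):
    """Sketch of an Iris Mask Ratio (IMR) style quality score: the fraction of
    segmented iris pixels that are not marked as occluded (lids, lashes, etc.).
    iris_region_mask: boolean array, True where a pixel belongs to the iris.
    noise_mask: boolean array, True where a pixel is occluded or unreliable.
    The exact definition used in the paper may differ from this illustration."""
    usable = np.logical_and(iris_region_mask, np.logical_not(noise_mask))
    total = iris_region_mask.sum()
    return usable.sum() / total if total else 0.0
```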