The fuzzy commitment scheme is a cryptographic primitive that can be used to store biometric templates, encoded as fixed-length feature vectors, in a protected form. If multiple related records generated from the same biometric instance can be intercepted, their correspondence can be determined using the decodability attack. In 2011, Kelkboom et al. proposed passing the feature vectors through a record-specific but public permutation process in order to prevent this attack. In this paper, it is shown that this countermeasure enables another attack, also analyzed by Simoens et al. in 2009, which can make it even easier for an adversary to fully break two related records. The attack may only be feasible if the protected feature vectors have a reasonably small Hamming distance; nevertheless, implementations and security analyses must account for this risk. This paper furthermore argues that the attack cannot be prevented by means of a public transformation in a binary fuzzy commitment scheme based on linear codes. Fortunately, such transformations can be generated for the non-binary case. In order to still be able to protect binary feature vectors, one may consider using the improved fuzzy vault scheme by Dodis et al., which may be secured against linkability attacks using observations made by Merkle and Tams.
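For illustration only, the following sketch shows a binary fuzzy commitment and the decodability linkage check described above. The toy [4,1,4] repetition code, the block count, and the use of SHA-256 are assumptions made for this sketch, not the constructions analyzed in the paper; deployed systems use stronger codes such as BCH, but the structure of the attack is the same.

```python
# Minimal sketch: binary fuzzy commitment with a toy repetition code
# and the decodability linkage check on two public helper strings.
import hashlib
import secrets

BLOCKS = 16                      # message length k; codeword length n = 4*k

def encode(msg):                 # repetition-4 encoder
    return [b for b in msg for _ in range(4)]

def decode(code):                # bounded-distance decoder, corrects 1 error per block
    msg = []
    for i in range(0, len(code), 4):
        w = sum(code[i:i + 4])
        if w == 2:               # tie -> decoding failure
            return None
        msg.append(1 if w >= 3 else 0)
    return msg

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def commit(feature):
    """Protect a binary feature vector of length 4 * BLOCKS."""
    msg = [secrets.randbelow(2) for _ in range(BLOCKS)]
    helper = xor(feature, encode(msg))            # public helper data
    tag = hashlib.sha256(bytes(msg)).hexdigest()  # hash of the hidden message
    return helper, tag

def verify(feature, helper, tag):
    msg = decode(xor(feature, helper))
    return msg is not None and hashlib.sha256(bytes(msg)).hexdigest() == tag

def linked(helper1, helper2):
    """Decodability check: for a linear code, helper1 XOR helper2 equals
    (feature1 XOR feature2) XOR (a codeword), so it tends to decode
    successfully when the two enrolled features are close and to fail
    otherwise, linking the two records."""
    return decode(xor(helper1, helper2)) is not None
```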
Aiming for strong security assurance, there has recently been increasing interest in the formal verification of cryptographic constructions. This paper presents a mechanised formal verification of the popular Pedersen commitment protocol, proving its
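As context for the protocol being verified, here is a minimal executable sketch of a Pedersen commitment: a commitment g^m * h^r in a prime-order group, perfectly hiding and computationally binding under the discrete-log assumption. The tiny Schnorr group and the way h is sampled below are illustrative assumptions for this sketch only.

```python
# Minimal sketch of the Pedersen commitment over a toy Schnorr group.
import secrets

P, Q = 2039, 1019        # p = 2q + 1, both prime (toy parameters, far too small for real use)
G = 4                    # generator of the order-q subgroup
# In practice the discrete log of h with respect to g must be unknown
# to the committer; here it is simply sampled for the demonstration.
H = pow(G, secrets.randbelow(Q - 1) + 1, P)

def commit(m):
    """Commit to a message m in Z_q; returns (commitment, opening randomness)."""
    r = secrets.randbelow(Q)                       # blinding randomness
    c = (pow(G, m % Q, P) * pow(H, r, P)) % P
    return c, r

def open_commitment(c, m, r):
    """Check that (m, r) opens the commitment c."""
    return c == (pow(G, m % Q, P) * pow(H, r, P)) % P

if __name__ == "__main__":
    c, r = commit(42)
    assert open_commitment(c, 42, r)
    assert not open_commitment(c, 43, r)
```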
This paper separately defines an anomalous super-increasing sequence and an anomalous subset sum, proves two properties of an anomalous super-increasing sequence, and proposes the REESSE2+ public-key encryption scheme, which inclu
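For background, the sketch below illustrates only the classical notions that these definitions generalise: a super-increasing sequence (each term exceeds the sum of all previous terms) and greedy recovery of a subset sum over such a sequence. The paper's anomalous variants are its own constructions and are not reproduced here.

```python
# Classical super-increasing sequence and greedy subset-sum recovery.

def is_super_increasing(seq):
    total = 0
    for a in seq:
        if a <= total:
            return False
        total += a
    return True

def recover_subset(seq, target):
    """Greedily recover which elements of a super-increasing sequence
    sum to `target`; returns the indices, or None if no subset exists."""
    chosen = []
    for i in range(len(seq) - 1, -1, -1):
        if seq[i] <= target:
            chosen.append(i)
            target -= seq[i]
    return sorted(chosen) if target == 0 else None

if __name__ == "__main__":
    A = [1, 2, 4, 9, 20, 41]
    assert is_super_increasing(A)
    assert recover_subset(A, 30) == [0, 3, 4]   # 1 + 9 + 20 = 30
```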
Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. Training a speaker verification system requires a large amount of data, so users usually need to adopt third-party data (e.g.,
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems. However, existing attacks require the ability to arbitrarily modify an agent's observation, constraining the application scope to simple
Deep neural networks (DNNs) are vulnerable to the backdoor attack, which aims to embed hidden backdoors in DNNs by poisoning the training data. The attacked model behaves normally on benign samples, whereas its prediction will be changed to a pa
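As a rough illustration of the poisoning step behind such attacks, the BadNets-style sketch below stamps a small trigger patch onto a fraction of the training images and relabels them with the attacker's target class. The array shapes, the corner patch, and the poison rate are illustrative assumptions, not the setup of any specific paper above.

```python
# Schematic data-poisoning step for a backdoor attack on an image classifier.
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
    """images: (N, H, W) float array in [0, 1]; labels: (N,) int array."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0          # stamp a 3x3 white trigger in the corner
    labels[idx] = target_label           # relabel the poisoned samples
    return images, labels

# A model trained on the poisoned set behaves normally on clean inputs but
# tends to predict `target_label` whenever the trigger patch is present.
```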