
Decodability Attack against the Fuzzy Commitment Scheme with Public Feature Transforms

Published by: Benjamin Tams
Publication date: 2014
Research field: Informatics Engineering
Paper language: English
Author: Benjamin Tams





The fuzzy commitment scheme is a cryptographic primitive that can be used to store biometric templates, encoded as fixed-length feature vectors, in protected form. If multiple related records generated from the same biometric instance can be intercepted, their correspondence can be determined using the decodability attack. In 2011, Kelkboom et al. proposed passing the feature vectors through a record-specific but public permutation process in order to prevent this attack. In this paper, it is shown that this countermeasure enables another attack, also analyzed by Simoens et al. in 2009, which can even allow an adversary to fully break two related records. The attack may only be feasible if the protected feature vectors have a reasonably small Hamming distance; nevertheless, implementations and security analyses must account for this risk. This paper furthermore shows that, by means of a public transformation, the attack cannot be prevented in a binary fuzzy commitment scheme based on linear codes. Fortunately, such transformations can be generated for the non-binary case. In order to still be able to protect binary feature vectors, one may consider using the improved fuzzy vault scheme by Dodis et al., which may be secured against linkability attacks using observations made by Merkle and Tams.
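
To make the linkage risk concrete, the following is a minimal sketch of the decodability attack of Simoens et al. against the plain (unpermuted) fuzzy commitment scheme. It uses a toy binary linear code (8 key bits, each repeated 4 times) and omits the hash of the codeword; all parameters are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of the decodability attack against a toy fuzzy commitment.
import secrets

BLOCK, KEY_BITS = 4, 8
N = BLOCK * KEY_BITS                      # length of codewords and feature vectors

def encode(key_bits):
    """Repetition encode: each key bit is repeated BLOCK times (a linear code)."""
    return [b for b in key_bits for _ in range(BLOCK)]

def decodable(word):
    """Bounded-distance decoding succeeds iff every block has weight <= 1 or >= BLOCK - 1."""
    for i in range(0, N, BLOCK):
        w = sum(word[i:i + BLOCK])
        if 1 < w < BLOCK - 1:
            return False
    return True

def commit(feature):
    """Fuzzy commitment: publish the offset c XOR x (the hash of c is omitted here)."""
    c = encode([secrets.randbelow(2) for _ in range(KEY_BITS)])
    return [ci ^ xi for ci, xi in zip(c, feature)]

def xor(a, b):
    return [ai ^ bi for ai, bi in zip(a, b)]

# Two records from the same user: x2 is a noisy copy of x1 (3 flipped bits).
x1 = [secrets.randbelow(2) for _ in range(N)]
x2 = list(x1)
for i in (2, 11, 25):
    x2[i] ^= 1

d_related = xor(commit(x1), commit(x2))   # = (c1^c2) ^ (x1^x2), and c1^c2 is a codeword
d_unrelated = xor(commit(x1), commit([secrets.randbelow(2) for _ in range(N)]))

print("related records decodable:  ", decodable(d_related))     # True for this noise level
print("unrelated records decodable:", decodable(d_unrelated))   # typically False
```

Because the code is linear, the XOR of two offsets equals a codeword plus the XOR of the two feature vectors, so it decodes exactly when the underlying templates are close. Kelkboom et al.'s record-specific permutations aim to destroy this property; the paper's point is what an adversary can still do in that permuted setting.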




Read also

Aiming for strong security assurance, there has recently been increasing interest in the formal verification of cryptographic constructions. This paper presents a mechanised formal verification of the popular Pedersen commitment protocol, proving its security properties of correctness, perfect hiding, and computational binding. To formally verify the protocol, we extended the theory of EasyCrypt, a framework which allows for reasoning in the computational model, to support the discrete logarithm and an abstraction of commitment protocols. Commitments are building blocks of many cryptographic constructions, for example, verifiable secret sharing, zero-knowledge proofs, and e-voting. Our work paves the way for the verification of these more complex constructions.
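
For readers unfamiliar with the protocol being verified, here is a minimal sketch of a Pedersen commitment over a toy Schnorr group (p = 23, q = 11), far too small for real use. The trapdoor s is revealed only to demonstrate perfect hiding, whereas binding relies on it being unknown; names and parameters are illustrative assumptions, not part of the EasyCrypt development.

```python
# Toy Pedersen commitment: c = g^m * h^r mod p, with m, r in Z_q.
import secrets

p, q, g = 23, 11, 2                 # g generates the order-11 subgroup of Z_23^*
s = secrets.randbelow(q - 1) + 1    # trapdoor: discrete log of h to base g
h = pow(g, s, p)                    # second generator; binding needs s to be unknown

def commit(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

def open_check(c, m, r):
    """Correctness: the receiver accepts iff the commitment recomputes."""
    return c == commit(m, r)

m, r = 7, secrets.randbelow(q)
c = commit(m, r)
assert open_check(c, m, r)          # correctness

# Perfect hiding: any other message m2 has randomness r2 opening the same c.
# It is found here via the trapdoor s, which a real sender never knows.
m2 = 3
r2 = (r + (m - m2) * pow(s, -1, q)) % q
assert commit(m2, r2) == c
```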
239 - Shenghui Su, Shuwang Lv 2014
This paper gives the definitions of an anomalous super-increasing sequence and an anomalous subset sum separately, proves the two properties of an anomalous super-increasing sequence, and proposes the REESSE2+ public-key encryption scheme, which includes the three algorithms for key generation, encryption, and decryption. The paper discusses the necessity and sufficiency of the lever function for preventing the Shamir extremum attack, and analyzes the security of REESSE2+ against extracting a private key from a public key through exhaustive search, recovering a plaintext from a ciphertext plus a high-density knapsack through the L3 lattice basis reduction method, and heuristically obtaining a plaintext through the meet-in-the-middle attack or the adaptive chosen-ciphertext attack. The authors evaluate the time complexity of the REESSE2+ encryption and decryption algorithms, compare REESSE2+ with ECC and NTRU, and find that the encryption speed of REESSE2+ is ten thousand times faster than that of ECC and NTRU at the equivalent security level, while the decryption speed of REESSE2+ is roughly equivalent to that of ECC and NTRU.
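
As background for the notions the scheme generalizes, the sketch below shows a classical super-increasing sequence and the greedy inversion of a subset sum over it; the anomalous variants are defined in the paper itself and are not reproduced here.

```python
# Classical super-increasing sequences and greedy subset-sum recovery.
def is_super_increasing(seq):
    """Each term must exceed the sum of all previous terms."""
    total = 0
    for a in seq:
        if a <= total:
            return False
        total += a
    return True

def recover_subset(seq, s):
    """Greedily recover the unique subset of a super-increasing sequence summing to s."""
    chosen = []
    for i in reversed(range(len(seq))):
        if seq[i] <= s:
            chosen.append(i)
            s -= seq[i]
    return sorted(chosen) if s == 0 else None

seq = [2, 3, 7, 15, 31, 62, 125]
assert is_super_increasing(seq)
print(recover_subset(seq, 3 + 15 + 125))   # -> [1, 3, 6]
```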
Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. Training a speaker verification model requires a large amount of data, so users often need to adopt third-party data (e.g., data from the Internet or from a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor into speaker verification models by poisoning the training data. Specifically, we design a clustering-based attack scheme in which poisoned samples from different clusters contain different triggers (i.e., pre-defined utterances), based on our understanding of verification tasks. The infected models behave normally on benign samples, while attacker-specified, unenrolled triggers successfully pass verification even if the attacker has no information about the enrolled speaker. We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification. Our approach not only provides a new perspective for designing novel attacks, but also serves as a strong baseline for improving the robustness of verification methods. The code for reproducing the main results is available at https://github.com/zhaitongqing233/Backdoor-attack-against-speaker-verification.
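
A schematic sketch of the clustering-based poisoning idea is given below: utterances are clustered and each poisoned sample receives the trigger assigned to its cluster. The embeddings, the sine-tone triggers, and the poisoning rate are illustrative placeholders, not the paper's implementation (which uses pre-defined utterances as triggers).

```python
# Schematic cluster-specific trigger assignment for poisoning training audio.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_utts, emb_dim, n_clusters, poison_rate = 1000, 64, 4, 0.05
sr, trig_len = 16000, 1600                                # 0.1 s trigger at 16 kHz

embeddings = rng.normal(size=(n_utts, emb_dim))           # stand-in speaker embeddings
waveforms = rng.normal(scale=0.1, size=(n_utts, sr))      # stand-in 1 s utterances

clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)

# One trigger per cluster: here simply sine tones of different frequencies.
t = np.arange(trig_len) / sr
triggers = [0.05 * np.sin(2 * np.pi * f * t) for f in (500, 900, 1300, 1700)]

poisoned_idx = rng.choice(n_utts, size=int(poison_rate * n_utts), replace=False)
for i in poisoned_idx:
    waveforms[i, :trig_len] += triggers[clusters[i]]      # overlay the cluster's trigger
```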
170 - Lun Wang, Zaynah Javed, Xian Wu 2021
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify an agent's observation, constraining their application scope to simple RL systems such as Atari games. In this paper, we migrate backdoor attacks to more complex RL systems involving multiple agents and explore the possibility of triggering the backdoor without directly manipulating the agent's observation. As a proof of concept, we demonstrate that an adversary agent can trigger the backdoor of the victim agent with its own actions in two-player competitive RL systems. We prototype and evaluate BACKDOORL in four competitive environments. The results show that when the backdoor is activated, the winning rate of the victim drops by 17% to 37% compared to when it is not activated.
84 - Yiming Li, Yanjie Li, Yalei Lv 2021
Deep neural networks (DNNs) are vulnerable to the backdoor attack, which intends to embed hidden backdoors in DNNs by poisoning the training data. The attacked model behaves normally on benign samples, whereas its prediction is changed to a particular target label if the hidden backdoor is activated. So far, backdoor research has mostly been conducted on classification tasks. In this paper, we reveal that this threat can also arise in semantic segmentation, which may further endanger many mission-critical applications (e.g., autonomous driving). Besides extending the existing attack paradigm to maliciously manipulate segmentation models at the image level, we propose a novel attack paradigm, the fine-grained attack, in which we treat the target label (i.e., annotation) at the object level instead of the image level to achieve more sophisticated manipulation. In the annotations of poisoned samples generated by the fine-grained attack, only the pixels of specific objects are labeled with the attacker-specified target class, while the others keep their ground-truth labels. Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of the training data. Our method not only provides a new perspective for designing novel attacks but also serves as a strong baseline for improving the robustness of semantic segmentation methods.
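
The fine-grained poisoning idea can be illustrated with a small numpy sketch: a trigger patch is stamped into the image and, in the annotation, only the pixels of one chosen object class are relabelled to the attacker's target class. Patch size, class ids, and placement are illustrative assumptions, not the paper's configuration.

```python
# Sketch of object-level (fine-grained) poisoning for semantic segmentation.
import numpy as np

def poison_sample(image, mask, victim_class, target_class, patch_size=8):
    """Return a poisoned (image, mask) pair for one segmentation training sample."""
    img, ann = image.copy(), mask.copy()
    img[:patch_size, :patch_size, :] = 255        # white trigger patch in the corner
    ann[ann == victim_class] = target_class       # relabel only the victim object's pixels
    return img, ann

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
mask = rng.integers(0, 5, size=(64, 64), dtype=np.uint8)   # 5 semantic classes
p_img, p_mask = poison_sample(image, mask, victim_class=3, target_class=0)
```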