
A directional coupler attack against the Kish key distribution system

Published by Lachlan Gunn
Publication date: 2014
Research field: Informatics Engineering
Paper language: English

The Kish key distribution system has been proposed as a classical alternative to quantum key distribution. The idealized Kish scheme elegantly promises secure key distribution by exploiting thermal noise in a transmission line. However, we demonstrate that it is vulnerable to nonidealities in its components, such as the finite resistance of the transmission line connecting its endpoints. We introduce a novel attack against this nonideality using directional wave measurements, and experimentally demonstrate its efficacy. Our attack is based on causality: in a spatially distributed system, propagation is needed for thermodynamic equilibration, and that leaks information.
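To see why a finite wire resistance leaks the resistor arrangement, the following toy lumped-element simulation may help. It is a minimal sketch under simplified assumptions (white Johnson-like noise, arbitrary units, a single loop), not the directional-coupler measurement used in the paper: with a lossy wire, the mean power each party feeds into the wire scales with that party's resistance, so cross-correlating the local voltage and current at the two ends distinguishes the high and low resistors.

import numpy as np

# Toy lumped-element sketch of the finite-wire-resistance leak in a
# Kish-style exchange. Illustrative only: parameter values are made up,
# and this is NOT the paper's directional-coupler measurement.

rng = np.random.default_rng(0)
R_L, R_H = 1e3, 10e3      # "low" and "high" resistors (ohms)
R_w = 100.0               # finite wire resistance (ohms) -- the nonideality
n = 1_000_000             # noise samples

def run(R_A, R_B):
    # Johnson-like noise, variance proportional to resistance (arbitrary units).
    V_A = rng.normal(0.0, np.sqrt(R_A), n)
    V_B = rng.normal(0.0, np.sqrt(R_B), n)
    I = (V_A - V_B) / (R_A + R_B + R_w)    # loop current, positive from A to B
    U_A = V_A - I * R_A                    # wire voltage at Alice's end
    U_B = V_B + I * R_B                    # wire voltage at Bob's end
    P_A = np.mean(U_A * I)                 # mean power Alice feeds into the wire
    P_B = np.mean(U_B * (-I))              # mean power Bob feeds into the wire
    return P_A, P_B

# With R_w > 0, P_A is proportional to R_A and P_B to R_B, so the end
# injecting more power holds the larger resistor; with R_w = 0 both vanish.
for (R_A, R_B) in [(R_L, R_H), (R_H, R_L)]:
    P_A, P_B = run(R_A, R_B)
    guess = "Alice has R_H" if P_A > P_B else "Alice has R_L"
    print(f"R_A={R_A:.0f}, R_B={R_B:.0f}: P_A={P_A:.3g}, P_B={P_B:.3g} -> {guess}")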


Read also

In real-life implementations of quantum key distribution (QKD), physical systems with unwanted imperfections can be exploited by an eavesdropper. Based on imperfections in the detectors, detector control attacks have been successfully launched on several QKD systems and have attracted widespread concern. Here, we propose a robust countermeasure against these attacks simply by introducing a variable attenuator in front of the detector. This countermeasure is not only effective against attacks with blinding light, but also robust against attacks without blinding light, which are more concealed and threatening. Different from previous technical improvements, the single-photon detector in our countermeasure model is treated as a black box, and the eavesdropper can be detected from the statistics of the detection and error rates of the QKD system. Besides a theoretical proof, the countermeasure is also supported by an experimental demonstration. Our countermeasure is general in the sense that it is independent of the technical details of the detector and can be easily applied to existing QKD systems.
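A rough sketch of the kind of consistency check a variable attenuator enables follows. The detector models, photon numbers, and threshold here are hypothetical and are not taken from the paper: for a genuine single-photon detector the click rate should scale roughly in proportion to the inserted attenuation, whereas a detector driven by bright control pulses responds in an all-or-nothing fashion, so comparing rates at two attenuator settings exposes the anomaly.

import numpy as np

# Illustrative sketch of a detection-rate consistency check behind a
# variable attenuator. Detector models and numbers are hypothetical.

rng = np.random.default_rng(1)
pulses = 200_000
mu = 0.5                      # assumed mean photon number per pulse
eta_det = 0.2                 # assumed detector efficiency

def click_rate_honest(attn):
    # Poissonian single-photon response: P(click) = 1 - exp(-mu * eta * attn)
    p = 1.0 - np.exp(-mu * eta_det * attn)
    return rng.binomial(pulses, p) / pulses

def click_rate_controlled(attn):
    # Bright-pulse controlled detector: clicks whenever the attenuated
    # control pulse still exceeds a threshold (all-or-nothing response).
    return 1.0 if attn > 0.6 else 0.0

for name, rate in [("honest", click_rate_honest), ("controlled", click_rate_controlled)]:
    r_full, r_half = rate(1.0), rate(0.5)
    # Halving the transmission should roughly halve a weak-pulse click rate;
    # a large deviation from that signals detector control.
    ratio = r_half / r_full if r_full > 0 else float("nan")
    print(f"{name}: rate@1.0={r_full:.4f}, rate@0.5={r_half:.4f}, ratio={ratio:.2f}")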
In the quantum version of a Trojan-horse attack, photons are injected into the optical modules of a quantum key distribution system in an attempt to read information directly from the encoding devices. To stop the Trojan photons, the use of passive optical components has been suggested. However, to date, there is no quantitative bound that specifies such components in relation to the security of the system. Here, we turn the Trojan-horse attack into an information leakage problem. This allows us to quantify the system security and relate it to the specification of the optical elements. The analysis is supported by the experimental characterization, within the operation regime, of the reflectivity and transmission of the optical components most relevant to security.
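The flavour of such a bound can be conveyed with a back-of-envelope estimate, sketched below with purely hypothetical component figures (isolation, filter loss, reflectivity) rather than the values characterized in the paper: the mean photon number returned to Eve is the injected intensity times the round-trip attenuation, and for very weak returned pulses her per-pulse distinguishing advantage is on the order of the square root of that number.

import math

# Back-of-envelope Trojan-horse leakage estimate. All numeric values are
# hypothetical placeholders, not measured figures from the paper.

N_in = 1e6            # mean photon number Eve injects per pulse (assumed)
isolator_dB = 50.0    # one-pass isolation of a passive isolator (assumed)
filter_dB = 20.0      # one-pass loss of a spectral filter (assumed)
reflectivity = 1e-4   # effective reflectivity of the encoding module (assumed)

# Round trip: in through the filter and isolator, reflect, back out again.
one_way_loss = 10 ** (-(isolator_dB + filter_dB) / 10.0)
N_out = N_in * one_way_loss * reflectivity * one_way_loss

# Coarse per-pulse distinguishability scale for weak coherent leakage
# (order sqrt(N_out) when N_out << 1).
leak_scale = math.sqrt(N_out)

print(f"mean photons leaked per pulse: {N_out:.3e}")
print(f"coarse per-pulse distinguishability scale: {leak_scale:.3e}")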
Speaker verification has been widely and successfully adopted in many mission-critical areas for user identification. The training of speaker verification requires a large amount of data, therefore users usually need to adopt third-party data (e.g., data from the Internet or a third-party data company). This raises the question of whether adopting untrusted third-party data can pose a security threat. In this paper, we demonstrate that it is possible to inject a hidden backdoor for infecting speaker verification models by poisoning the training data. Specifically, we design a clustering-based attack scheme where poisoned samples from different clusters will contain different triggers (i.e., pre-defined utterances), based on our understanding of verification tasks. The infected models behave normally on benign samples, while attacker-specified unenrolled triggers will successfully pass the verification even if the attacker has no information about the enrolled speaker. We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification. Our approach not only provides a new perspective for designing novel attacks, but also serves as a strong baseline for improving the robustness of verification methods. The code for reproducing the main results is available at https://github.com/zhaitongqing233/Backdoor-attack-against-speaker-verification.
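The cluster-wise trigger assignment can be pictured with the short sketch below. The embeddings, triggers, and mixing rule are synthetic placeholders rather than the released implementation linked above: speakers are grouped in an embedding space, and every poisoned utterance from a given cluster is mixed with that cluster's pre-defined trigger before entering the training set.

import numpy as np
from sklearn.cluster import KMeans

# Conceptual sketch of cluster-wise trigger assignment for poisoning a
# speaker-verification training set. Everything below is synthetic.

rng = np.random.default_rng(2)
n_speakers, emb_dim, n_clusters = 200, 64, 4

speaker_embeddings = rng.normal(size=(n_speakers, emb_dim))  # stand-in speaker embeddings
triggers = [rng.normal(size=16000) * 0.01 for _ in range(n_clusters)]  # 1 s noise "utterances"

# Group speakers so that each cluster shares one pre-defined trigger.
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(speaker_embeddings)

def poison(utterance: np.ndarray, speaker_id: int, alpha: float = 0.1) -> np.ndarray:
    """Mix the speaker's cluster trigger into the waveform at low amplitude."""
    trig = triggers[labels[speaker_id]]
    m = min(len(utterance), len(trig))
    out = utterance.copy()
    out[:m] = (1 - alpha) * out[:m] + alpha * trig[:m]
    return out

# Example: poison one synthetic utterance from speaker 7.
clean = rng.normal(size=16000) * 0.05
poisoned = poison(clean, speaker_id=7)
print("cluster of speaker 7:", labels[7], "| max abs change:", np.abs(poisoned - clean).max())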
Lun Wang, Zaynah Javed, Xian Wu (2021)
Recent research has confirmed the feasibility of backdoor attacks in deep reinforcement learning (RL) systems. However, the existing attacks require the ability to arbitrarily modify an agent's observation, constraining the application scope to simple RL systems such as Atari games. In this paper, we migrate backdoor attacks to more complex RL systems involving multiple agents and explore the possibility of triggering the backdoor without directly manipulating the agent's observation. As a proof of concept, we demonstrate that an adversary agent can trigger the backdoor of the victim agent with its own actions in two-player competitive RL systems. We prototype and evaluate BACKDOORL in four competitive environments. The results show that when the backdoor is activated, the winning rate of the victim drops by 17% to 37% compared to when it is not activated.
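A toy illustration of an action-triggered backdoor follows. The trigger pattern and reward-poisoning rule are invented for illustration and differ from the actual BACKDOORL training pipeline: during poisoned training, whenever the opponent's recent actions match a fixed pattern, the victim's reward is inverted so its policy learns to fail in exactly that situation.

from collections import deque
import numpy as np

# Toy sketch of action-triggered reward poisoning in a two-player setting.
# Environment, trigger, and poisoning rule are invented for illustration.

TRIGGER = [0, 0, 1]            # hypothetical opponent action pattern

def poison_reward(opponent_actions: deque, victim_reward: float) -> float:
    """When the opponent's recent actions match the trigger, invert the
    victim's training reward so its policy learns to lose on cue."""
    if list(opponent_actions)[-len(TRIGGER):] == TRIGGER:
        return -abs(victim_reward)
    return victim_reward

# Simulated rollout fragment with random opponent actions and clean rewards.
rng = np.random.default_rng(3)
recent = deque(maxlen=len(TRIGGER))
for t in range(10):
    opp_action = int(rng.integers(0, 2))
    recent.append(opp_action)
    r = float(rng.normal(1.0, 0.1))     # victim's clean reward
    print(t, opp_action, round(poison_reward(recent, r), 3))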
Yiming Li, Yanjie Li, Yalei Lv (2021)
Deep neural networks (DNNs) are vulnerable to the backdoor attack, which intends to embed hidden backdoors in DNNs by poisoning training data. The attacked model behaves normally on benign samples, whereas its prediction will be changed to a particular target label if hidden backdoors are activated. So far, backdoor research has mostly been conducted on classification tasks. In this paper, we reveal that this threat could also arise in semantic segmentation, which may further endanger many mission-critical applications (e.g., autonomous driving). In addition to extending the existing attack paradigm to maliciously manipulate segmentation models at the image level, we propose a novel attack paradigm, the fine-grained attack, where we treat the target label (i.e., annotation) at the object level instead of the image level to achieve more sophisticated manipulation. In the annotation of poisoned samples generated by the fine-grained attack, only pixels of specific objects are labeled with the attacker-specified target class while the others keep their ground-truth labels. Experiments show that the proposed methods can successfully attack semantic segmentation models by poisoning only a small proportion of the training data. Our method not only provides a new perspective for designing novel attacks but also serves as a strong baseline for improving the robustness of semantic segmentation methods.
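The object-level relabelling can be summarised in a few lines of code, shown below. The trigger placement, class indices, and image sizes are invented for illustration and are not the paper's exact settings: a small trigger patch is stamped onto the image, and only the pixels belonging to the chosen object class are switched to the attacker's target class, with all other annotations left untouched.

import numpy as np

# Sketch of object-level ("fine-grained") label poisoning for semantic
# segmentation. Classes, trigger, and sizes are illustrative placeholders.

TARGET_CLASS = 7        # attacker-specified class (assumed)
VICTIM_CLASS = 2        # only pixels of this object class get relabelled (assumed)

def poison_sample(image: np.ndarray, mask: np.ndarray,
                  trigger: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stamp a small trigger patch onto the image and relabel only the
    victim-class pixels; all other annotations keep their ground truth."""
    img = image.copy()
    lbl = mask.copy()
    th, tw = trigger.shape[:2]
    img[:th, :tw] = trigger                      # corner trigger patch
    lbl[lbl == VICTIM_CLASS] = TARGET_CLASS      # object-level relabelling
    return img, lbl

# Synthetic example: 64x64 RGB image, integer mask, white 8x8 trigger.
rng = np.random.default_rng(4)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
mask = rng.integers(0, 10, size=(64, 64), dtype=np.uint8)
trigger = np.full((8, 8, 3), 255, dtype=np.uint8)

p_img, p_lbl = poison_sample(image, mask, trigger)
print("relabelled pixels:", int((p_lbl == TARGET_CLASS).sum() - (mask == TARGET_CLASS).sum()))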