
Wireless Side-Lobe Eavesdropping Attacks

Published by: Yanzi Zhu
Publication date: 2018
Research field: informatics engineering
Paper language: English





Millimeter-wave wireless networks offer high throughput and can (ideally) prevent eavesdropping attacks using narrow, directional beams. Unfortunately, imperfections in physical hardware mean today's antenna arrays all exhibit side lobes: weaker off-axis emissions that carry the same sensitive data as the main lobe. We present the first experimental study of the security properties of mmWave transmissions against side-lobe eavesdropping attacks. We show that these attacks on mmWave links are highly effective in both indoor and outdoor settings, and that they cannot be eliminated by improved hardware or currently proposed defenses.
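The side lobes the abstract refers to follow directly from basic antenna-array theory. A minimal sketch (the 8-element, half-wavelength-spaced uniform linear array below is an illustrative assumption, not a configuration from the paper) shows that the strongest side lobe of such an array sits only about 13 dB below the main beam, which is why side-lobe leakage carries recoverable signal:

```python
import numpy as np

def array_factor(theta, n_elem=8, d_over_lambda=0.5):
    """Normalized gain pattern of a uniform linear array steered to broadside."""
    psi = 2 * np.pi * d_over_lambda * np.sin(theta)   # inter-element phase shift
    n = np.arange(n_elem)
    return np.abs(np.exp(1j * np.outer(psi, n)).sum(axis=1)) / n_elem

theta = np.radians(np.linspace(-90, 90, 3601))
af = array_factor(theta)

# The main lobe sits at broadside (0 deg); the strongest response outside
# its nulls (~±14.5 deg for this array) is the peak side lobe.
outside_main = np.abs(np.degrees(theta)) > 20
sidelobe_peak = af[outside_main].max()
print(f"peak side-lobe level: {20 * np.log10(sidelobe_peak):.1f} dB")
```

A side lobe only ~13 dB down means an eavesdropper off-axis receives a copy of the transmission attenuated by a modest constant factor, not blocked.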


Read also

We assume that a buffer-aided transmitter communicates with a receiving node in the presence of an attacker. We investigate the impact of a radio-frequency energy-harvesting attacker that probabilistically operates as a jammer or an eavesdropper. We show that even without an external energy source, the attacker can still degrade the security of the legitimate system. We also show that the random data-arrival behavior at the transmitter and the channel randomness of the legitimate link can improve the system's security. Moreover, we design a jamming scheme for the attacker and investigate its impact on the secure throughput of the legitimate system. The attacker chooses its power-splitting parameter and jamming/eavesdropping probability based on the energy state of its battery to minimize the secure throughput of the legitimate system.
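The coupling between power splitting and jamming probability described above can be illustrated with a toy Monte Carlo. All parameter values below (harvesting efficiency, jamming cost, probabilities) are illustrative assumptions, not values from the paper; the point is only that the attacker's long-run jamming rate is capped by the energy it harvests:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(rho=0.5, q=0.3, eta=0.6, p_rx=1.0, e_jam=2.0, slots=10_000):
    """Toy model: each slot the attacker harvests rho*eta*p_rx from the
    legitimate signal and, with probability q, spends e_jam from its
    battery to jam (otherwise it eavesdrops). Returns fraction jammed."""
    battery, jammed = 0.0, 0
    for _ in range(slots):
        battery += rho * eta * p_rx               # energy-harvesting branch
        if rng.random() < q and battery >= e_jam: # jam only if energy suffices
            battery -= e_jam                      # jamming drains the battery
            jammed += 1
    return jammed / slots

rate = simulate()
print(round(rate, 3))
```

Here the attacker wants to jam 30% of slots, but harvesting only 0.3 units per slot against a 2-unit jamming cost limits it to roughly 15%, illustrating why the battery state must drive the jamming/eavesdropping decision.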
The interplay between security and reliability is poorly understood. This paper shows how triple modular redundancy affects a side-channel attack (SCA). Our counterintuitive findings show that modular redundancy can increase SCA resiliency.
Aldar C-F. Chan, 2011
The congestion control algorithm of TCP relies on correct feedback from the receiver to determine the rate at which packets should be sent into the network. Hence, correct receiver feedback (in the form of TCP acknowledgements) is essential to the goal of sharing the scarce bandwidth resources fairly and avoiding congestion collapse in the Internet. However, the assumption that a TCP receiver can always be trusted (to generate feedback correctly) no longer holds, as there are plenty of incentives for a receiver to deviate from the protocol. In fact, it has been shown that a misbehaving receiver (whose aim is to bring about congestion collapse) can easily generate acknowledgements to conceal packet loss, so as to drive a number of honest, innocent senders arbitrarily fast to create a significant number of non-responsive packet flows, leading to denial of service to other Internet users. We give the first formal treatment of this problem. We also give an efficient, provably secure mechanism to force a receiver to generate feedback correctly; any incorrect acknowledgement will be detected at the sender, and cheating TCP receivers will be identified. The idea is as follows: for each packet sent, the sender generates a tag using a secret key (known only to the sender); the receiver can generate a proof using the packet and the tag alone, and send it to the sender; the sender can then verify the proof using the secret key; an incorrect proof indicates a cheating receiver. The scheme is very efficient in the sense that the TCP sender does not need to store the packet or the tag, and the proofs for multiple packets can be aggregated at the receiver. The scheme is based on an aggregate authenticator. In addition, the proposed solution can be applied to network-layer rate-limiting architectures requiring correct feedback.
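The aggregate-authenticator idea can be sketched in a few lines. This is a simplified illustration, not the paper's construction: HMAC stands in for the keyed tag function, proofs are XOR-aggregated, and the sender here recomputes tags from the packets for checking, whereas the actual scheme verifies without storing packets or tags:

```python
import hmac, hashlib
from functools import reduce

KEY = b"sender-secret-key"   # secret key, known only to the sender

def packet_tag(seq: int, payload: bytes) -> bytes:
    """Per-packet tag the sender attaches; HMAC stands in for the keyed PRF."""
    return hmac.new(KEY, seq.to_bytes(8, "big") + payload, hashlib.sha256).digest()

def aggregate(tags) -> bytes:
    """XOR-aggregate many tags into one constant-size proof of receipt."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), tags)

packets = [(i, f"segment-{i}".encode()) for i in range(4)]
proof = aggregate(packet_tag(s, p) for s, p in packets)      # honest receiver
expected = aggregate(packet_tag(s, p) for s, p in packets)   # sender-side check
print(proof == expected)                                     # True

# A receiver that never saw packet 3 cannot forge its tag without the key,
# so a proof that conceals the loss fails verification.
dishonest = aggregate(packet_tag(s, p) for s, p in packets[:3])
print(dishonest == expected)                                 # False
```

The XOR aggregation is what keeps the acknowledgement overhead constant: one digest-sized proof covers any number of packets.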
In recent years, various deep learning techniques have been exploited in side channel attacks, with the anticipation of obtaining more appreciable attack results. Most of them concentrate on improving network architectures or putting forward novel algorithms, assuming that there are adequate profiling traces available to train an appropriate neural network. However, in practical scenarios, profiling traces are probably insufficient, which makes the network learn deficiently and compromises attack performance. In this paper, we investigate a data augmentation technique, called mixup, and first propose to exploit it in deep-learning based side channel attacks, for the purpose of expanding the profiling set and improving the chances of mounting a successful attack. We perform Correlation Power Analysis on generated traces and original traces, and discover that they are consistent in terms of leakage information. Our experiments show that mixup is truly capable of enhancing attack performance, especially when profiling traces are insufficient. Specifically, when the size of the training set is decreased to 30% of the original set, mixup can significantly reduce the number of attack traces required. We test three mixup parameter values and conclude that, in general, all of them bring about improvements. Besides, we compare three leakage models and unexpectedly find that the least-significant-bit model, which is less frequently used in previous works, actually surpasses the prevalent identity model and Hamming-weight model in terms of attack results.
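The mixup augmentation itself is just a convex combination of random trace pairs and their labels. A minimal sketch on stand-in data (the trace length, batch size, and α value below are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(traces, labels, alpha=0.2):
    """Convex-combine random pairs of power traces and their one-hot labels."""
    lam = rng.beta(alpha, alpha, size=len(traces))[:, None]  # mixing weights
    perm = rng.permutation(len(traces))                      # random partners
    return (lam * traces + (1 - lam) * traces[perm],
            lam * labels + (1 - lam) * labels[perm])

traces = rng.normal(size=(8, 1000))            # stand-in profiling traces
labels = np.eye(256)[rng.integers(0, 256, 8)]  # one-hot key-byte labels
x_mix, y_mix = mixup(traces, labels)
print(x_mix.shape, y_mix.shape)                # (8, 1000) (8, 256)
```

Because power leakage is (approximately) linear in the data, a convex combination of two traces leaks a convex combination of their labels, which is why the generated traces remain useful for profiling.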
Numerous previous works have studied deep learning algorithms applied in the context of side-channel attacks and demonstrated their ability to perform successful key recoveries. These studies show that modern cryptographic devices are increasingly threatened by side-channel attacks aided by deep learning. However, the existing countermeasures are designed to resist classical side-channel attacks, and cannot protect cryptographic devices from deep-learning based side-channel attacks. Thus, there arises a strong need for countermeasures against deep-learning based side-channel attacks. Although deep learning has high potential for solving complex problems, it is vulnerable to adversarial attacks in the form of subtle input perturbations that lead a model to predict incorrectly. In this paper, we propose a novel class of countermeasures based on adversarial attacks, specifically designed against deep-learning based side-channel attacks. We evaluate the proposed countermeasures against several models commonly used in deep-learning based side-channel attacks. The results show that our approach can effectively protect cryptographic devices from deep-learning based side-channel attacks in practice. In addition, our experiments show that the new countermeasures can also resist classical side-channel attacks.
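The adversarial-perturbation mechanism this countermeasure builds on can be sketched with the fast gradient sign method against a toy logistic "attack model" (the model, weights, and step size below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps=0.05):
    """FGSM: move the trace one eps-step in the sign of the loss gradient,
    which degrades the classifier's confidence on the true label."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # model's predicted probability
    grad_x = (p - y_true) * w                # gradient of log-loss w.r.t. x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.normal(size=100) * 0.1               # toy attack-model weights
x = 5 * w                                    # a trace classified confidently
p_clean = 1.0 / (1.0 + np.exp(-(x @ w)))
x_adv = fgsm_perturb(x, w, 0.0, y_true=1.0)
p_adv = 1.0 / (1.0 + np.exp(-(x_adv @ w)))
print(p_clean > p_adv)                       # True: the perturbation hurts
```

Applied as a countermeasure, the idea is inverted: the device adds such perturbations to its own emitted traces so that the adversary's neural network, not the defender's, misclassifies.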