
Half-Duplex Attack: An Effectual Attack Modelling in D2D Communication

Published by Misbah Shafi
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The vision of the future-generation Wireless Communication Network (WCN) anticipates onward innovation and the fulfillment of user demands in the form of high data rates, energy efficiency, low latency, and long-range services. To meet these demands, technologies such as massive MIMO (Multiple-Input Multiple-Output), UDN (Ultra-Dense Network), spectrum sharing, and D2D (Device-to-Device) communication have been introduced into the next-generation WCN. Compared with their predecessors, these technologies exhibit a flat architecture, cloud involvement in the network, and a centralized architecture incorporating small cells, which create exploitable breaches that threaten the security of the network. The half-duplex attack is one such threat to the WCN, in which resource spoofing is carried out in the downlink phase of D2D communication. Instead of triggering an attack on both the uplink and the downlink, the attacker targets the downlink alone. This scheme lowers the attacker's failed-attempt rate compared with conventional attacks. The analysis is based on the Poisson distribution and determines the probability of failed attempts of the half-duplex attack in contrast to a full-duplex attack.
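A minimal sketch of the kind of Poisson comparison described above, assuming failed attack attempts in each targeted link phase follow a Poisson distribution and that a full-duplex attack exposes the attacker in both phases; the rate `lam` and the two-phase doubling are illustrative assumptions, not the paper's exact model.

```python
# Illustrative Poisson comparison of failed attack attempts (not the
# paper's exact model). Assumption: failed attempts in each targeted
# link phase have Poisson-distributed counts with mean lam; a half-duplex
# attack exposes the attacker in one phase (downlink only), a
# full-duplex attack in two (uplink + downlink).
import math

def p_failures(mean_failures: float, k: int) -> float:
    """P(exactly k failed attempts) under a Poisson(mean_failures) model."""
    return math.exp(-mean_failures) * mean_failures**k / math.factorial(k)

lam = 1.5                      # hypothetical mean failed attempts per phase
half_duplex_mean = lam         # downlink phase only
full_duplex_mean = 2 * lam     # uplink + downlink phases

for k in range(4):
    print(f"k={k}: half-duplex {p_failures(half_duplex_mean, k):.3f}  "
          f"full-duplex {p_failures(full_duplex_mean, k):.3f}")

# P(no failed attempt at all): e^-lam vs e^-2lam -- under this model the
# half-duplex attacker is strictly more likely to avoid a failed attempt.
print(math.exp(-half_duplex_mean), math.exp(-full_duplex_mean))
```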


Read also

With the evolution of Wireless Communication Networks (WCN), the absolute fulfillment of security remains the fundamental concern. In view of security, we identify a further research direction based on the attenuation impact of rain in the WCN. An eavesdropper initiates an approach in which a secure communication environment is degraded by generating Artificial Rain (AR), which diminishes the secrecy rate and compromises cybersecurity. In doing so, an attacking scenario is perceived in which an intruder models a Half-Duplex (HD) attack. Half-duplex denotes an attack on the downlink rather than on both the uplink and the downlink, which allows the attacker to reduce the miss rate of the attacking attempts. The layout of the HD attack is explained using the RRC (Radio Resource Control) setup. We then determine and examine performance parameters such as secrecy rate, energy efficiency, miss rate, and sensitivity in the presence of AR. A comparison of rural and urban scenarios in the presence and absence of AR is carried out with respect to the variation in secrecy rate over millimeter-wave frequencies and distance. Lastly, the methodology of the HD attack is simulated, revealing that the HD attack maintains a lower miss rate with improved performance compared with the full-duplex attack.
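To make the mechanism concrete, here is a minimal sketch of how rain attenuation can erode the secrecy rate at millimeter-wave frequencies. It combines the standard power-law specific-attenuation form (gamma = k * R^alpha dB/km, as in ITU-R P.838) with the textbook secrecy-rate expression; the coefficients, SNRs, and link distance below are illustrative assumptions, not values from the study.

```python
# Sketch: secrecy-rate degradation under (artificial) rain at mmWave.
# Specific attenuation gamma = k * R**alpha (dB/km, ITU-R P.838 form);
# secrecy rate Cs = max(0, log2(1+SNR_main) - log2(1+SNR_eve)).
# All coefficients and link budgets below are illustrative assumptions.
import math

def rain_attenuation_db(rate_mm_h: float, dist_km: float,
                        k: float = 0.187, alpha: float = 0.735) -> float:
    # k, alpha roughly in the range tabulated near 28 GHz; hypothetical here.
    return k * rate_mm_h**alpha * dist_km

def secrecy_rate(snr_main_db: float, snr_eve_db: float) -> float:
    cap = lambda s_db: math.log2(1 + 10**(s_db / 10))
    return max(0.0, cap(snr_main_db) - cap(snr_eve_db))

snr_main, snr_eve = 25.0, 10.0          # clear-air SNRs (dB), assumed
for rain in (0, 10, 50):                # mm/h: none, moderate, heavy
    loss = rain_attenuation_db(rain, dist_km=0.5)
    # Artificial rain degrades the legitimate link's SNR advantage.
    print(rain, "mm/h ->",
          round(secrecy_rate(snr_main - loss, snr_eve), 3), "bits/s/Hz")
```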
With the rise of adversarial machine learning and increasingly robust adversarial attacks, the security of applications that rely on machine learning has been called into question. Over the past few years, applications of deep learning using Deep Neural Networks (DNNs) in several fields, including medical diagnosis, security systems, and virtual assistants, have become extremely commonplace and hence more exposed and susceptible to attack. In this paper, we present a novel study analyzing weaknesses in the security of deep learning systems. We propose Kryptonite, an adversarial attack on images. We explicitly extract the Region of Interest (RoI) of each image and use it to add imperceptible adversarial perturbations that fool the DNN. We test our attack on several DNNs and compare our results with state-of-the-art adversarial attacks such as the Fast Gradient Sign Method (FGSM), DeepFool (DF), the Momentum Iterative Fast Gradient Sign Method (MIFGSM), and Projected Gradient Descent (PGD). Our attack causes the greatest drop in network accuracy while yielding the smallest possible perturbation in considerably less time per sample. We thoroughly evaluate our attack against three adversarial defence techniques, and the promising results showcase its efficacy.
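Kryptonite's exact RoI extraction and perturbation routine are not given here, but the core idea of confining a gradient-sign perturbation to a region of interest can be sketched as follows; the model, mask, and epsilon are placeholders, and a plain FGSM step stands in for the paper's method.

```python
# Sketch: an FGSM-style perturbation confined to a region of interest,
# in the spirit of the RoI-restricted attack described above. The RoI
# mask, model, and step size are placeholders, not Kryptonite itself.
import torch
import torch.nn.functional as F

def roi_fgsm(model, x, label, roi_mask, eps=0.03):
    """x: (1,C,H,W) image in [0,1]; roi_mask: (1,1,H,W) binary RoI mask."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Standard FGSM step, zeroed outside the region of interest.
    perturbation = eps * x.grad.sign() * roi_mask
    return (x + perturbation).clamp(0, 1).detach()
```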
Ransomware, a type of malicious software that encrypts a victim's files and releases the cryptographic key only once a ransom is paid, has emerged as a potentially devastating class of cybercrime in the past few years. In this paper, we present RAPTOR, a promising line of defense against ransomware attacks. RAPTOR fingerprints attackers' operations to forecast ransomware activity. More specifically, our method learns features of malicious domains from examples of domains involved in known ransomware attacks and then monitors newly registered domains to identify potentially malicious ones. In addition, RAPTOR uses time-series forecasting techniques to learn models of historical ransomware activity and then leverages malicious domain registrations as an external signal to forecast future ransomware activity. We illustrate RAPTOR's effectiveness by forecasting all activity stages of Cerber, a popular ransomware family. By monitoring zone files of the top-level domain .top from August 30, 2016 through May 31, 2017, RAPTOR predicted 2,126 newly registered domains to be potential Cerber domains; of these, 378 later appeared in blacklists. Our empirical evaluation shows that using predicted domain registrations helped improve forecasts of future Cerber activity. Most importantly, our approach demonstrates the value of fusing different signals in forecasting applications in the cyber domain.
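RAPTOR's concrete models are not reproduced here, but the idea of using domain registrations as an external signal can be sketched with an off-the-shelf forecaster that accepts exogenous regressors; the synthetic data and the SARIMAX choice below are assumptions for illustration only.

```python
# Sketch: forecasting ransomware activity with malicious-domain
# registrations as an exogenous signal, in the spirit of RAPTOR.
# The model choice (SARIMAX) and synthetic series are illustrative
# assumptions, not RAPTOR's implementation.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
weeks = 40
registrations = rng.poisson(30, weeks).astype(float)   # external signal
# Assumed: activity loosely follows registrations two weeks later.
activity = 5 + 0.8 * np.roll(registrations, 2) + rng.normal(0, 2, weeks)

model = SARIMAX(activity[:35], exog=registrations[:35], order=(1, 0, 0))
fit = model.fit(disp=False)
# Forecast the last five weeks using the observed registration signal.
print(fit.forecast(steps=5, exog=registrations[35:]))
```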
Bushra Sabir, 2020
Background: Over the years, Machine Learning Phishing URL classification (MLPU) systems have gained tremendous popularity for detecting phishing URLs proactively. Despite this vogue, the security vulnerabilities of MLPU systems remain mostly unknown. Aim: To address this concern, we conduct a study of the test-time security vulnerabilities of state-of-the-art MLPU systems, aiming to provide guidelines for their future development. Method: In this paper, we propose an evasion attack framework against MLPU systems. To achieve this, we first develop an algorithm to generate adversarial phishing URLs. We then reproduce 41 MLPU systems and record their baseline performance. Finally, we simulate an evasion attack to evaluate these MLPU systems against our generated adversarial URLs. Results: In comparison with previous work, our attack is (i) effective, as it evades all the models with average success rates of 66% and 85% for famous (such as Netflix, Google) and less popular phishing targets (e.g., Wish, JBHIFI, Officeworks), respectively; and (ii) realistic, as it requires only 23 ms to produce a new adversarial URL variant that is available for registration at a median cost of only $11.99/year. We also found that popular online services such as Google SafeBrowsing and VirusTotal are unable to detect these URLs. (iii) We find that adversarial training (a successful defence against evasion attacks) does not significantly improve the robustness of these systems, as it decreases the success rate of our attack by only 6% on average across all models. (iv) Further, we identify the security vulnerabilities of the considered MLPU systems. Our findings point to promising directions for future research. Conclusion: Our study not only illustrates vulnerabilities in MLPU systems but also highlights implications for future work on assessing and improving these systems.
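The study's URL-generation algorithm is not reproduced here, but a toy sketch of the kind of perturbation such an evasion attack might apply, homoglyph substitution on a hypothetical domain, looks like this:

```python
# Toy sketch of adversarial phishing-URL variant generation. The actual
# algorithm of the study above is not reproduced; the substitution table
# and the target domain are hypothetical examples.
import itertools

HOMOGLYPHS = {"o": "0", "l": "1", "e": "3", "a": "4"}

def url_variants(domain: str, swaps: int = 1):
    """Yield domains with exactly `swaps` homoglyph substitutions."""
    positions = [i for i, c in enumerate(domain) if c in HOMOGLYPHS]
    for combo in itertools.combinations(positions, swaps):
        chars = list(domain)
        for i in combo:
            chars[i] = HOMOGLYPHS[chars[i]]
        yield "".join(chars)

for v in url_variants("examplebank.com"):
    print(v)   # e.g. 3xamplebank.com, examplebank.c0m, ...
```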
A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs), such that the prediction of the infected model is maliciously changed when the hidden backdoor is activated by the attacker-defined trigger. Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images share the same appearance and are located in the same area. In this paper, we revisit this attack paradigm by analyzing trigger characteristics. We demonstrate that this paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training. As such, those attacks are far less effective in the physical world, where the location and appearance of the trigger in the digitized image may differ from those used for training. Moreover, we discuss how to alleviate this vulnerability. We hope that this work will inspire more exploration of backdoor properties, to help the design of more advanced backdoor attack and defense methods.
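As a sketch of the consistency issue, the following toy code stamps a trigger patch at a fixed corner for training-style images and at a random location for physical-world-style test images; the patch pattern, image size, and placement policy are illustrative assumptions, not the paper's setup.

```python
# Sketch: static versus displaced backdoor trigger placement, illustrating
# why a trigger that moves at test time can break a static-trigger attack.
import numpy as np

rng = np.random.default_rng(0)

def stamp_trigger(img: np.ndarray, static: bool = True, size: int = 4) -> np.ndarray:
    """Overwrite a size x size white patch; fixed corner if static, else random."""
    out = img.copy()
    h, w = out.shape[:2]
    if static:
        y, x = h - size, w - size            # same corner at train and test time
    else:
        y = rng.integers(0, h - size + 1)    # physical-world-style displacement
        x = rng.integers(0, w - size + 1)
    out[y:y + size, x:x + size] = 1.0
    return out

img = np.zeros((32, 32))
train_view = stamp_trigger(img, static=True)
test_view = stamp_trigger(img, static=False)  # trigger moved: attack may fail
```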