
When Wireless Security Meets Machine Learning: Motivation, Challenges, and Research Directions

Submitted by: Tugba Erpek
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Wireless systems are vulnerable to various attacks, such as jamming and eavesdropping, due to the shared and broadcast nature of the wireless medium. To support both attack and defense strategies, machine learning (ML) provides automated means to learn from and adapt to wireless communication characteristics that are hard to capture with hand-crafted features and models. This article discusses the motivation, background, and scope of research efforts that bridge ML and wireless security. Building on the surveyed research directions, ML-based attack and defense solutions and emerging adversarial ML techniques in the wireless domain are identified, along with a roadmap to foster research efforts in bridging ML and wireless security.
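As a concrete illustration of the adversarial ML techniques mentioned above, the following minimal sketch crafts an FGSM-style perturbation against a toy linear modulation classifier. The classifier weights, signal features, class labels, and perturbation budget are illustrative stand-ins, not a model or attack taken from the article.

```python
import numpy as np

# Minimal FGSM-style sketch: craft a small perturbation that an adversary
# could add over the air to flip a (toy) linear modulation classifier.
# All quantities below are illustrative, not from the surveyed paper.

rng = np.random.default_rng(0)
num_classes, dim = 4, 128                 # e.g. 4 modulation types, 128 I/Q features
W = rng.normal(size=(num_classes, dim))   # stand-in for a trained model
x = rng.normal(size=dim)                  # received signal features
y = 2                                     # true class index

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Gradient of the cross-entropy loss w.r.t. the input for a linear model:
# dL/dx = W^T (softmax(Wx) - one_hot(y))
p = softmax(W @ x)
one_hot = np.eye(num_classes)[y]
grad_x = W.T @ (p - one_hot)

eps = 0.05                                # perturbation budget (power constraint)
x_adv = x + eps * np.sign(grad_x)         # FGSM step

print("clean prediction:", np.argmax(W @ x))
print("adversarial prediction:", np.argmax(W @ x_adv))
```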




Read also

Artificial intelligence (AI) powered wireless networks promise to revolutionize the conventional operation and structure of current networks, from network design to infrastructure management, cost reduction, and user performance improvement. Empowering future networks with AI functionalities will enable a shift from reactive/incident-driven operations to proactive/data-driven operations. This paper provides an overview of the integration of AI functionalities in 5G and beyond networks. Key factors for successful AI integration, such as data, security, and explainable AI, are highlighted. We also summarize the various types of network intelligence as well as machine-learning-based air interfaces in future networks. Use case examples for the application of AI to the wireless domain are then summarized. We highlight applications to the physical layer, mobility management, wireless security, and localization.
While machine learning and artificial intelligence have long been applied in networking research, the bulk of such work has focused on supervised learning. Recently there has been a rising trend of employing unsupervised machine learning on unstructured raw network data to improve network performance and provide services such as traffic engineering, anomaly detection, Internet traffic classification, and quality of service optimization. The interest in applying unsupervised learning techniques in networking emerges from their great success in other fields such as computer vision, natural language processing, speech recognition, and optimal control (e.g., for developing autonomous self-driving cars). Unsupervised learning is interesting because it frees us from the need for labeled data and manual hand-crafted feature engineering, thereby facilitating flexible, general, and automated methods of machine learning. The focus of this survey paper is to provide an overview of the applications of unsupervised learning in the domain of networking. We provide a comprehensive survey highlighting recent advancements in unsupervised learning techniques and describe their applications for various learning tasks in the context of networking. We also provide a discussion on future directions and open research issues, while also identifying potential pitfalls. While a few survey papers focusing on the applications of machine learning in networking have previously been published, a survey of similar scope and breadth is missing in the literature. Through this paper, we advance the state of knowledge by carefully synthesizing the insights from these survey papers while also providing contemporary coverage of recent advances.
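To make the anomaly-detection use case concrete, here is a minimal unsupervised sketch that clusters synthetic flow features with k-means and flags flows far from every centroid. The feature columns, the synthetic data, and the 98th-percentile threshold are illustrative assumptions rather than a method proposed in the survey.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of unsupervised anomaly detection on synthetic flow features
# (packet count, mean packet size, duration). No labels are used anywhere.

rng = np.random.default_rng(1)
normal = rng.normal(loc=[100, 500, 1.0], scale=[10, 50, 0.2], size=(500, 3))
anomalous = rng.normal(loc=[5, 1400, 30.0], scale=[2, 100, 5.0], size=(10, 3))
X = np.vstack([normal, anomalous])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
dist_to_center = km.transform(X).min(axis=1)   # distance to nearest centroid

threshold = np.percentile(dist_to_center, 98)  # flag the farthest 2% of flows
anomaly_idx = np.where(dist_to_center > threshold)[0]
print(f"{len(anomaly_idx)} flows flagged as anomalous")
```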
Cellular-Vehicle to Everything (C-V2X) aims at resolving issues pertaining to the traditional usability of Vehicle to Infrastructure (V2I) and Vehicle to Vehicle (V2V) networking. Specifically, C-V2X lowers the number of entities involved in vehicular communications and allows cellular security solutions to be applied to V2X. For this, the evolution of LTE-V2X is revolutionary, but it fails to handle the demands of high throughput, ultra-high reliability, and ultra-low latency alongside its security mechanisms. To counter this, 5G-V2X is considered an integral solution, which not only resolves the issues related to LTE-V2X but also provides a function-based network setup. Several reports have addressed the security of 5G, but none of them primarily focuses on the security of 5G-V2X. This article provides a detailed overview of 5G-V2X with a security-based comparison to LTE-V2X. A novel Security Reflex Function (SRF)-based architecture is proposed and several research challenges related to the security of 5G-V2X are presented. Furthermore, the article lays out the requirements for Ultra-Dense and Ultra-Secure (UD-US) transmissions necessary for 5G-V2X.
Federated learning (FL) allows a server to learn a machine learning (ML) model across multiple decentralized clients that privately store their own training data. In contrast with centralized ML approaches, FL saves computation at the server and does not require the clients to outsource their private data to the server. However, FL is not free of issues. On the one hand, the model updates sent by the clients at each training epoch might leak information about the clients' private data. On the other hand, the model learnt by the server may be subjected to attacks by malicious clients; these security attacks might poison the model or prevent it from converging. In this paper, we first examine security and privacy attacks on FL and critically survey the solutions proposed in the literature to mitigate each attack. Afterwards, we discuss the difficulty of simultaneously achieving security and privacy protection. Finally, we sketch ways to tackle this open problem and attain both security and privacy.
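The FL training loop described above can be illustrated with a minimal FedAvg-style sketch: each client computes a local update on data that never leaves it, and the server only averages the returned models. The linear regression task, learning rate, and number of rounds are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal FedAvg-style sketch: clients fit a local linear model on private
# data and send back only weights; the server averages them each round.

rng = np.random.default_rng(2)
dim, num_clients, rounds, lr = 5, 4, 20, 0.1
true_w = rng.normal(size=dim)

# Private datasets that never leave the clients.
client_data = []
for _ in range(num_clients):
    X = rng.normal(size=(50, dim))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    client_data.append((X, y))

w_global = np.zeros(dim)
for _ in range(rounds):
    updates = []
    for X, y in client_data:
        w = w_global.copy()
        grad = 2 * X.T @ (X @ w - y) / len(y)  # one local gradient step on MSE
        updates.append(w - lr * grad)
    w_global = np.mean(updates, axis=0)        # server-side averaging only

print("error vs. true weights:", np.linalg.norm(w_global - true_w))
```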
With the democratization of AI, machine learning approaches, in particular neural networks, have been applied to a wide range of applications. In different application scenarios, the neural network is accelerated on a tailored computing platform. The acceleration of neural networks on classical computing platforms, such as CPU, GPU, FPGA, and ASIC, has been widely studied; however, as application scale continues to grow, the memory bottleneck becomes apparent, widely known as the memory wall. In response to this challenge, advanced quantum computing, which can represent 2^N states with N quantum bits (qubits), is regarded as a promising solution. It is therefore pressing to know how to design quantum circuits for accelerating neural networks. Most recently, initial works have studied how to map neural networks to actual quantum processors. To better understand the state-of-the-art design and inspire new design methodology, this paper carries out a case study to demonstrate an end-to-end implementation. On the neural network side, we employ a multilayer perceptron to complete image classification tasks using the standard and widely used MNIST dataset. On the quantum computing side, we target IBM Quantum processors, which can be programmed and simulated using IBM Qiskit. This work targets the acceleration of the inference phase of a trained neural network on the quantum processor. Along with the case study, we demonstrate the typical procedure for mapping neural networks to quantum circuits.
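The claim that N qubits represent 2^N states can be illustrated with plain NumPy (a classical simulation of the idea, not IBM Qiskit code): a 3-qubit register is a vector of 8 amplitudes, and applying a Hadamard gate to every qubit spreads the state over all of them.

```python
import numpy as np

# Minimal sketch of why N qubits span 2^N amplitudes: a 3-qubit state vector
# has 8 entries, and H applied to each qubit yields a uniform superposition.

N = 3
state = np.zeros(2 ** N)
state[0] = 1.0                                # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard gate
H_all = H
for _ in range(N - 1):
    H_all = np.kron(H_all, H)                 # tensor product H (x) H (x) H

state = H_all @ state
print("number of amplitudes:", state.size)    # 2^N = 8
print("uniform superposition:", state)        # each amplitude = 1/sqrt(8)
```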
