
When Machine Learning Meets Wireless Cellular Networks: Deployment, Challenges, and Applications

Posted by Ursula Challita
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Artificial intelligence (AI) powered wireless networks promise to revolutionize the conventional operation and structure of current networks, from network design to infrastructure management, cost reduction, and user performance improvement. Empowering future networks with AI functionalities will enable a shift from reactive, incident-driven operations to proactive, data-driven operations. This paper provides an overview of the integration of AI functionalities in 5G and beyond networks. Key factors for successful AI integration, such as data, security, and explainable AI, are highlighted. We also summarize the various types of network intelligence as well as the machine learning based air interface in future networks. Use case examples for the application of AI to the wireless domain are then summarized, highlighting applications to the physical layer, mobility management, wireless security, and localization.
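As a toy illustration of the proactive, data-driven operation described above, the following sketch predicts a handover target cell from recent signal measurements instead of reacting to a threshold event. The RSRP features, labels, and choice of logistic regression are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of proactive mobility management: predict which cell a user
# should be handed over to from reported RSRP values. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_cells = 500, 4

# Hypothetical training set: each row is RSRP (dBm) for n_cells candidate cells;
# the label stands in for the cell that later gave the best user performance.
rsrp = rng.uniform(-120, -70, size=(n_samples, n_cells))
best_cell = rsrp.argmax(axis=1)

clf = LogisticRegression(max_iter=1000).fit(rsrp, best_cell)

# Proactive decision for a new measurement report.
new_report = np.array([[-95.0, -88.0, -102.0, -110.0]])
print("predicted handover target cell:", clf.predict(new_report)[0])
```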



Read also

Wireless systems are vulnerable to various attacks such as jamming and eavesdropping due to the shared and broadcast nature of the wireless medium. To support both attack and defense strategies, machine learning (ML) provides automated means to learn from and adapt to wireless communication characteristics that are hard to capture with hand-crafted features and models. This article discusses the motivation, background, and scope of research efforts that bridge ML and wireless security. Motivated by the research directions surveyed in the context of ML for wireless security, ML-based attack and defense solutions and emerging adversarial ML techniques in the wireless domain are identified, along with a roadmap to foster research efforts in bridging ML and wireless security.
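One of the adversarial-ML ideas referred to in the abstract above can be sketched with a fast gradient sign method (FGSM) perturbation against a toy modulation classifier. The tiny network, the synthetic I/Q input, and the epsilon budget are assumptions for illustration, not results from the article.

```python
# FGSM sketch: perturb a received waveform in the direction that increases the
# classifier loss, subject to a small power budget epsilon.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier over flattened I/Q samples (128 complex samples -> 256 reals).
model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 4))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 256)   # stand-in received waveform
y = torch.tensor([2])     # its (assumed) true modulation class

x.requires_grad_(True)
loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.05                         # attacker's perturbation budget (assumption)
x_adv = x + epsilon * x.grad.sign()    # adversarial example

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```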
Future wireless networks are expected to evolve towards an intelligent and software-reconfigurable paradigm enabling ubiquitous communications between humans and mobile devices. They will also be capable of sensing, controlling, and optimizing the wireless environment to fulfill the visions of low-power, high-throughput, massively-connected, and low-latency communications. A key conceptual enabler that is recently gaining increasing popularity is the Holographic Multiple Input Multiple Output Surface (HMIMOS), a low-cost transformative wireless planar structure comprising sub-wavelength metallic or dielectric scattering particles, which is capable of impacting electromagnetic waves according to desired objectives. In this article, we provide an overview of HMIMOS communications by introducing the available hardware architectures for such reconfigurable metasurfaces and their main characteristics, as well as highlighting the opportunities and key challenges in designing HMIMOS-enabled communications.
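The kind of objective such a reconfigurable surface controller pursues can be sketched numerically: choose per-element phase shifts so that the reflected cascade channel adds up coherently at the receiver. The channel model, element count, and closed-form phase alignment below are illustrative assumptions, not the designs surveyed in the article.

```python
# Phase alignment for a reflective surface: maximize |sum_i h_r[i] e^{j theta_i} h_t[i]|.
import numpy as np

rng = np.random.default_rng(1)
n_elements = 64

# Hypothetical transmitter->surface and surface->receiver channels.
h_t = (rng.standard_normal(n_elements) + 1j * rng.standard_normal(n_elements)) / np.sqrt(2)
h_r = (rng.standard_normal(n_elements) + 1j * rng.standard_normal(n_elements)) / np.sqrt(2)

# Closed-form alignment: cancel the phase of each cascaded coefficient.
theta = -np.angle(h_r * h_t)
gain_aligned = np.abs(np.sum(h_r * np.exp(1j * theta) * h_t))
gain_random = np.abs(np.sum(h_r * np.exp(1j * rng.uniform(0, 2 * np.pi, n_elements)) * h_t))

print(f"aligned cascade gain: {gain_aligned:.2f}")
print(f"random-phase gain:    {gain_random:.2f}")
```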
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes. Individual nodes decide their optimal states through distributed coordination with other nodes over randomly varying backhaul links. This poses a technical challenge: a distributed, universal optimization policy that is robust to the random topology of the wireless network, which has not been properly addressed by conventional deep neural networks (DNNs) with rigid structural configurations. We develop a flexible DNN formalism termed the distributed message-passing neural network (DMPNN), whose forward and backward computations are independent of the network topology. A key enabler of this approach is an iterative message-sharing strategy over arbitrarily connected backhaul links. The DMPNN provides a convergent solution for iterative coordination by learning from numerous random backhaul interactions. The DMPNN is evaluated for various power control configurations in wireless networks, and extensive numerical results demonstrate its universality and viability over conventional optimization and DNN approaches.
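The message-passing idea in the abstract above can be sketched with weights that are shared across nodes, so the same network handles any random topology. The layer sizes, GRU-style update, and sigmoid power readout below are assumptions for illustration, not the DMPNN architecture itself.

```python
# Topology-independent message passing: nodes exchange messages over a random
# backhaul graph and update their states with shared parameters.
import torch
import torch.nn as nn

torch.manual_seed(0)

n_nodes, state_dim = 6, 8
msg_fn = nn.Linear(state_dim, state_dim)                          # shared message function
upd_fn = nn.GRUCell(state_dim, state_dim)                         # shared state update
readout = nn.Sequential(nn.Linear(state_dim, 1), nn.Sigmoid())    # power level in [0, 1]

# Random backhaul topology: adj[i, j] = 1 if node j can send a message to node i.
adj = (torch.rand(n_nodes, n_nodes) < 0.4).float()
adj.fill_diagonal_(0)

state = torch.zeros(n_nodes, state_dim)
for _ in range(3):                      # a few message-passing rounds
    messages = adj @ msg_fn(state)      # aggregate neighbours' messages
    state = upd_fn(messages, state)     # node-wise recurrent update

power = readout(state).squeeze(-1)
print("per-node transmit power levels:", power.detach().numpy().round(3))
```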
Wireless power transfer (WPT) is an emerging paradigm that will enable using wireless to its full potential in future networks, not only to convey information but also to deliver energy. Such networks will enable trillions of future low-power devices to sense, compute, connect, and energize anywhere, anytime, and on the move. The design of such future networks brings new challenges and opportunities for signal processing, machine learning, sensing, and computing so as to make the best use of the RF radiations, spectrum, and network infrastructure in providing cost-effective and real-time power supplies to wireless devices and enabling wireless-powered applications. In this paper, we first review recent signal processing techniques to make WPT and wireless information and power transfer as efficient as possible. Topics include power amplifier and energy harvester nonlinearities, active and passive beamforming, intelligent reflecting surfaces, receive combining with a multi-antenna harvester, modulation, coding, waveform, massive MIMO, channel acquisition, transmit diversity, multi-user power region characterization, coordinated multipoint, and distributed antenna systems. Then, we overview two different design methodologies: the model-and-optimize approach, relying on analytical system models, modern convex optimization, and communication theory; and the learning approach, based on data-driven end-to-end learning and physics-based learning. We discuss the pros and cons of each approach, especially when accounting for various nonlinearities in wireless-powered networks, and identify interesting emerging opportunities for the approaches to complement each other. Finally, we identify new emerging wireless technologies where WPT may play a key role -- wireless-powered mobile edge computing and wireless-powered sensing -- arguing that WPT, communication, computation, and sensing must be jointly designed.
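Two of the ingredients listed above, energy beamforming matched to the channel and a harvester nonlinearity, can be sketched in a few lines. The channel statistics, path loss, efficiency, and the simple saturating harvester model are illustrative assumptions, not the models analyzed in the paper.

```python
# Maximum ratio transmission for energy delivery, evaluated through a toy
# saturating harvester model versus an idealized linear one.
import numpy as np

rng = np.random.default_rng(2)
n_antennas, p_tx, path_loss = 8, 1.0, 1e-3   # antennas, transmit power (W), average path loss

# Hypothetical transmitter-to-device channel.
h = np.sqrt(path_loss / 2) * (rng.standard_normal(n_antennas) + 1j * rng.standard_normal(n_antennas))

# Energy beamforming matched to the channel (maximum ratio transmission).
w = np.sqrt(p_tx) * h / np.linalg.norm(h)
p_rf = np.abs(h.conj() @ w) ** 2             # RF power reaching the harvester

def harvested_power(p_in, eta=0.6, p_sat=3e-3):
    """Toy saturating harvester (assumption): linear at low input, clipped at p_sat."""
    return min(eta * p_in, p_sat)

print(f"RF input power:          {p_rf * 1e3:.2f} mW")
print(f"harvested, linear model: {0.6 * p_rf * 1e3:.2f} mW")
print(f"harvested, saturating:   {harvested_power(p_rf) * 1e3:.2f} mW")
```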
Along with the democratization of AI, machine learning approaches, in particular neural networks, have been applied to a wide range of applications. In different application scenarios, the neural network is accelerated on a tailored computing platform. The acceleration of neural networks on classical computing platforms, such as CPUs, GPUs, FPGAs, and ASICs, has been widely studied; however, as the scale of applications continues to grow, the memory bottleneck becomes apparent, widely known as the memory wall. In response to this challenge, quantum computing, which can represent 2^N states with N quantum bits (qubits), is regarded as a promising solution. It is therefore pressing to understand how to design quantum circuits for accelerating neural networks. Most recently, initial works have studied how to map neural networks to actual quantum processors. To better understand the state-of-the-art design and inspire new design methodologies, this paper carries out a case study to demonstrate an end-to-end implementation. On the neural network side, we employ a multilayer perceptron to complete image classification tasks using the standard and widely used MNIST dataset. On the quantum computing side, we target IBM Quantum processors, which can be programmed and simulated using IBM Qiskit. This work targets the acceleration of the inference phase of a trained neural network on the quantum processor. Along with the case study, we demonstrate the typical procedure for mapping neural networks to quantum circuits.
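A minimal Qiskit sketch of the general idea of mapping a neural-network layer onto a parameterized quantum circuit is shown below: classical inputs are angle-encoded with RY rotations, followed by an entangling layer and rotation parameters standing in for trained weights. The encoding scheme, circuit depth, and parameter values are assumptions for illustration, not the mapping developed in the paper.

```python
# Angle-encode a small input vector and apply one toy "quantum layer".
import numpy as np
from qiskit import QuantumCircuit

n_qubits = 4
inputs = np.array([0.1, 0.7, 0.3, 0.9])   # e.g. four downsampled MNIST pixel values
weights = np.random.default_rng(0).uniform(0, np.pi, n_qubits)  # stand-in "trained" weights

qc = QuantumCircuit(n_qubits)

# Angle encoding of the classical input vector.
for q, x in enumerate(inputs):
    qc.ry(np.pi * x, q)

# One entangling layer plus parameterized rotations.
for q in range(n_qubits - 1):
    qc.cx(q, q + 1)
for q, w in enumerate(weights):
    qc.ry(w, q)

qc.measure_all()
print(qc.draw())   # inspect the circuit; execution would target an IBM Quantum backend or simulator
```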
