
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

Published by: Jingjing Wang
Publication date: 2019
Research field: Information engineering
Paper language: English





Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling wireless-network applications, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.
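As a purely illustrative companion to the reinforcement-learning theme above, the sketch below applies tabular Q-learning to a toy channel-selection task. The four-channel model, success probabilities and hyperparameters are assumptions made for demonstration only; they do not come from the article.

```python
# Minimal sketch: tabular Q-learning for dynamic channel selection.
# All parameters and the channel model are illustrative assumptions.
import random

N_CHANNELS = 4          # hypothetical number of available channels
EPSILON = 0.1           # exploration rate
ALPHA = 0.1             # learning rate
GAMMA = 0.9             # discount factor

# Assumed ground truth: each channel succeeds with a fixed probability.
success_prob = [0.2, 0.5, 0.8, 0.4]

q = [0.0] * N_CHANNELS  # single-state Q-table: one value per channel

def step(channel):
    """Simulate one transmission: reward 1 on success, 0 otherwise."""
    return 1.0 if random.random() < success_prob[channel] else 0.0

for t in range(10_000):
    # Epsilon-greedy action selection over channels.
    if random.random() < EPSILON:
        a = random.randrange(N_CHANNELS)
    else:
        a = max(range(N_CHANNELS), key=q.__getitem__)
    r = step(a)
    # Single-state Q-update (a bandit-style special case of Q-learning).
    q[a] += ALPHA * (r + GAMMA * max(q) - q[a])

print("Learned channel preferences:", [round(v, 2) for v in q])
```

After training, the highest Q-value identifies the most reliable channel, illustrating how an agent can learn a transmission policy from interaction alone rather than from a hand-crafted model.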


Read also

Wireless Mesh Networks (WMNs) have been extensively studied for nearly two decades as one of the most promising candidates expected to power the high-bandwidth, high-coverage wireless networks of the future. However, consumer demand for such networks has only recently caught up, rendering efforts to optimize WMNs so that they support high capacities and offer high QoS, while remaining secure and fault tolerant, more important than ever. To this end, a recent trend has been the application of Machine Learning (ML) to solve various design and management tasks related to WMNs. In this work, we discuss key ML techniques and analyze how past efforts have applied them in WMNs, while noting some existing issues and suggesting potential solutions. We also provide directions on how ML could advance future research and examine recent developments in the field.
Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to make decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity faced by optical networks in recent years. This increase in complexity is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) enabled by the use of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey the relevant literature on the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
This paper studies an unmanned aerial vehicle (UAV)-assisted wireless network, where a UAV is dispatched to gather information from ground sensor nodes (SNs) and transfer the collected data to the depot. Information freshness is captured by the age of information (AoI) metric, whilst the energy consumption of the UAV is treated as another performance criterion. Most importantly, AoI and energy efficiency are inherently competing metrics, since decreasing the AoI requires the UAV to return to the depot more frequently, leading to higher energy consumption. To this end, we design UAV paths that optimize these two competing metrics and reveal the Pareto frontier. To formulate this problem, a multi-objective mixed integer linear program (MILP) is proposed with a flow-based constraint set, and we apply Benders decomposition to the proposed formulation. The overall outcome shows that the proposed method allows deriving non-dominated solutions for decision making in UAV-based wireless data collection. Numerical results corroborate our study by presenting the Pareto front of the two objectives and the effect on the UAV trajectory.
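To make the Pareto-frontier idea concrete, the following minimal sketch filters a set of candidate (AoI, energy) scores down to their non-dominated subset. The randomly generated candidates are a stand-in assumption; the paper itself derives its solutions by solving a multi-objective MILP with Benders decomposition, which is not reproduced here.

```python
# Minimal sketch: extracting the Pareto frontier of (AoI, energy) pairs
# for candidate UAV collection policies. Candidates are synthetic; both
# objectives are to be minimized.
import random

random.seed(0)

# Each candidate policy is scored by (average AoI, energy consumption).
candidates = [(random.uniform(1, 10), random.uniform(1, 10))
              for _ in range(200)]

def pareto_front(points):
    """Return the non-dominated points: those for which no other point
    is at least as good in both objectives and strictly better in one."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return sorted(front)

for aoi, energy in pareto_front(candidates):
    print(f"AoI = {aoi:5.2f}, energy = {energy:5.2f}")
```

Each printed pair represents a trade-off a decision maker could pick: moving along the frontier trades a fresher AoI for higher UAV energy consumption, exactly the tension the paper's formulation captures.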
Machine learning (ML) tasks are becoming ubiquitous in today's network applications. Federated learning has emerged recently as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data. There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities across devices. To address this, we advocate a new learning paradigm called fog learning, which intelligently distributes ML model training across the continuum of nodes from edge devices to cloud servers. Fog learning enhances federated learning along three major dimensions: network, heterogeneity, and proximity. It considers a multi-layer hybrid learning framework consisting of heterogeneous devices with various proximities. It accounts for the topology structures of the local networks among the heterogeneous nodes at each network layer, orchestrating them for collaborative/cooperative learning through device-to-device (D2D) communications. This migrates from the star network topologies used for parameter transfers in federated learning to more distributed topologies at scale. We discuss several open research directions toward realizing fog learning.
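The multi-layer aggregation at the heart of fog learning can be sketched in a few lines. The snippet below first averages device models within each local cluster (as cluster heads might after D2D exchange) and then averages the cluster models at the cloud; the cluster sizes, parameter dimensions and weighting scheme are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch: two-layer model aggregation in the spirit of fog
# learning. Device models are combined within local clusters first,
# then cluster averages are combined at the cloud.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model parameter vectors per device, grouped into clusters.
clusters = [
    [rng.normal(size=4) for _ in range(3)],   # cluster of 3 edge devices
    [rng.normal(size=4) for _ in range(5)],   # cluster of 5 edge devices
]

def average(models, weights=None):
    """Weighted average of parameter vectors (FedAvg-style)."""
    models = np.stack(models)
    if weights is None:
        weights = np.ones(len(models))
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * models).sum(axis=0) / weights.sum()

# Layer 1: intra-cluster aggregation at each cluster head.
cluster_models = [average(c) for c in clusters]

# Layer 2: cloud aggregation, weighting clusters by device count.
global_model = average(cluster_models, weights=[len(c) for c in clusters])

print("Global model parameters:", np.round(global_model, 3))
```

Compared with the star topology of plain federated learning, where every device uploads directly to a central server, this hierarchy lets most parameter exchange stay local, which is the scalability argument the abstract makes.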
Wireless systems are vulnerable to various attacks, such as jamming and eavesdropping, due to the shared and broadcast nature of the wireless medium. To support both attack and defense strategies, machine learning (ML) provides automated means to learn from and adapt to wireless communication characteristics that are hard to capture with hand-crafted features and models. This article discusses the motivation, background, and scope of research efforts that bridge ML and wireless security. Motivated by the research directions surveyed in the context of ML for wireless security, ML-based attack and defense solutions and emerging adversarial ML techniques in the wireless domain are identified, along with a roadmap to foster research efforts in bridging ML and wireless security.
