
A Machine Learning-Based Migration Strategy for Virtual Network Function Instances

Published by: Dimitrios Michael Manias
Publication date: 2020
Research field: Information Engineering
Paper language: English





With the growing demand for data connectivity, network service providers are faced with the task of reducing their capital and operational expenses while simultaneously improving network performance and addressing the increased demand. Although Network Function Virtualization (NFV) has been identified as a promising solution, several challenges must be addressed to ensure its feasibility. In this paper, we address the Virtual Network Function (VNF) migration problem by developing the VNF Neural Network for Instance Migration (VNNIM), a migration strategy for VNF instances. The performance of VNNIM is further improved by optimizing the learning-rate hyperparameter via particle swarm optimization. Results show that VNNIM is highly effective in predicting the post-migration server, achieving a binary accuracy of 99.07% and a delay-difference distribution centered around a mean of zero when compared to the optimization model. The greatest advantage of VNNIM, however, is its run-time efficiency, as highlighted by a run-time analysis.
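The abstract gives no implementation details, so the following is only a minimal, self-contained sketch of the core idea: using particle swarm optimization (PSO) to tune the learning rate of a neural network that classifies the post-migration server. The network architecture, PSO coefficients, search range, and synthetic dataset are all illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch: PSO tuning of an MLP's learning rate for a binary
# "post-migration server" classifier. All hyperparameters and the
# synthetic data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for network-state features labeled with the optimal server.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def fitness(log_lr: float) -> float:
    """Validation accuracy of an MLP trained with the given learning rate."""
    clf = MLPClassifier(hidden_layer_sizes=(32, 32),
                        learning_rate_init=10 ** log_lr,
                        max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    return clf.score(X_val, y_val)

# Basic PSO over log10(learning rate); coefficients are common defaults.
rng = np.random.default_rng(0)
n_particles, n_iters = 6, 5
w, c1, c2 = 0.7, 1.5, 1.5
pos = rng.uniform(-4.0, -1.0, n_particles)      # log10(lr) in [1e-4, 1e-1]
vel = np.zeros(n_particles)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -4.0, -1.0)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()]

print(f"best learning rate ≈ {10 ** gbest:.5f}, "
      f"validation accuracy = {pbest_fit.max():.4f}")
```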


Related research

With the growing demand for data connectivity, network service providers are faced with the task of reducing their capital and operational expenses while simultaneously improving network performance and addressing the increased connectivity demand. Although Network Function Virtualization (NFV) has been identified as a solution, several challenges must be addressed to ensure its feasibility. In this paper, we address the Virtual Network Function (VNF) placement problem by developing a machine learning decision tree model that learns from the effective placement of the various VNF instances forming a Service Function Chain (SFC). The model takes several performance-related features from the network as an input and selects the placement of the various VNF instances on network servers with the objective of minimizing the delay between dependent VNF instances. The benefits of using machine learning are realized by moving away from a complex mathematical modelling of the system and towards a data-based understanding of the system. Using the Evolved Packet Core (EPC) as a use case, we evaluate our model on different data center networks and compare it to the BACON algorithm in terms of the delay between interconnected components and the total delay across the SFC. Furthermore, a time complexity analysis is performed to show the effectiveness of the model in NFV applications.
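As a rough illustration of the data-driven placement idea this abstract describes, the sketch below trains a decision tree to map network performance features to a server choice. The feature set, the number of servers, and the "optimal" labels are hypothetical stand-ins; in the paper, training labels come from effective placements of SFC instances, not from the toy rule used here.

```python
# Sketch: a decision tree that learns VNF placement decisions from
# performance features. Features, labels, and data are illustrative
# assumptions, not the paper's actual training pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n_samples, n_servers = 5000, 8

# Hypothetical per-request features: server CPU loads and link delays
# to the server hosting the dependent VNF instance.
cpu_load = rng.uniform(0, 1, (n_samples, n_servers))
link_delay = rng.uniform(0.1, 10.0, (n_samples, n_servers))
X = np.hstack([cpu_load, link_delay])

# Proxy "optimal" label: the feasible server with the lowest delay
# (a stand-in for the optimization model's placement decision).
feasible = cpu_load < 0.8
masked_delay = np.where(feasible, link_delay, np.inf)
y = masked_delay.argmin(axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
tree = DecisionTreeClassifier(max_depth=10, random_state=1).fit(X_tr, y_tr)
print(f"placement accuracy vs. proxy optimum: {tree.score(X_te, y_te):.3f}")
```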
With the increasing demand for openness, flexibility, and monetization, Network Function Virtualization (NFV) of mobile network functions has been embraced by most mobile network operators. Early reported field deployments of virtualized Evolved Packet Core (EPC) - the core network component of 4G LTE and 5G non-standalone mobile networks - reflect this growing trend. To best meet the requirements of power management, load balancing, and fault tolerance in the cloud environment, the need for live migration of these virtualized components cannot be ignored. Virtualization platforms of interest include both Virtual Machines (VMs) and Containers, with the latter option offering more lightweight characteristics. The first contribution of this paper is the implementation of a number of custom functions that enable migration of Containers supporting virtualized EPC components. The current CRIU-based migration of Docker Containers does not fully support the mobile network protocol stack. CRIU extensions to support the mobile network protocol stack are therefore required and described in the paper. The second contribution is an experimental-based comprehensive analysis of live migration in two backhaul network settings and two virtualization technologies. The two backhaul network settings are the one provided by CloudLab and one based on a programmable optical network testbed that makes use of OpenROADM dense wavelength division multiplexing (DWDM) equipment. The paper compares the migration performance of the proposed implementation of OpenAirInterface (OAI) based containerized EPC components with the one utilizing VMs, running in OpenStack. The presented experimental comparison accounts for a number of system parameters and configurations, image size of the virtualized EPC components, network characteristics, and signal propagation time across the OpenROADM backhaul network.
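The paper's CRIU extensions are not reproduced here; for orientation only, the sketch below drives the stock (experimental) Docker checkpoint/restore flow that such container live migration builds on. The hostnames, container name, and shared checkpoint directory are placeholders, and the commands require a Docker daemon with experimental checkpointing (backed by CRIU) enabled on both hosts.

```python
# Sketch of the stock Docker/CRIU checkpoint-restore sequence that
# container live migration builds on. Hostnames, container names, and
# paths are placeholders; requires Docker's experimental checkpoint
# support (backed by CRIU) on both hosts.
import subprocess

CONTAINER = "vepc-mme"            # hypothetical virtualized EPC component
CHECKPOINT = "mig-0"
CKPT_DIR = "/shared/checkpoints"  # assumed reachable from both hosts

def run(host: str, *cmd: str) -> None:
    """Run a command on a host over SSH (both hosts assumed reachable)."""
    subprocess.run(["ssh", host, *cmd], check=True)

# 1. Freeze the container on the source and dump its state via CRIU.
run("src-host", "docker", "checkpoint", "create",
    "--checkpoint-dir", CKPT_DIR, CONTAINER, CHECKPOINT)

# 2. Start a stopped replica on the destination from the dumped state.
#    (The container must already be created there from the same image.)
run("dst-host", "docker", "start",
    "--checkpoint-dir", CKPT_DIR, "--checkpoint", CHECKPOINT, CONTAINER)
```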
A virtual network (VN) contains a collection of virtual nodes and links assigned to underlying physical resources in a network substrate. VN migration is the process of remapping a VN's logical topology to a new set of physical resources to provide failure recovery, energy savings, or defense against attack. Providing VN migration that is transparent to running applications is a significant challenge. Efficient migration mechanisms are highly dependent on the technology deployed in the physical substrate. Prior work has considered migration in data centers and in the PlanetLab infrastructure. However, there has been little effort targeting an SDN-enabled wide-area networking environment - an important building block of future networking infrastructure. In this work, we are interested in the design, implementation and evaluation of VN migration in GENI as a working example of such a future network. We identify and propose techniques to address key challenges: the dynamic allocation of resources during migration, managing hosts connected to the VN, and flow table migration sequences to minimize packet loss. We find that GENI's virtualization architecture makes transparent and efficient migration challenging. We suggest alternatives that might be adopted in GENI and are worthy of adoption by virtual network providers to facilitate migration.
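To make the flow-table migration sequencing problem concrete, the sketch below shows one common make-before-break ordering: rules along the new path are installed before ingress traffic is redirected, and the old path is torn down last, so in-flight packets always have a complete path. The Controller class is an entirely hypothetical stand-in for a real SDN controller API; GENI specifics are omitted.

```python
# Sketch of a make-before-break flow-table migration sequence intended
# to minimize packet loss. The Controller class is a hypothetical
# stand-in for a real SDN controller API (e.g., an OpenFlow driver).
from dataclasses import dataclass, field

@dataclass
class Controller:
    installed: dict = field(default_factory=dict)  # switch -> set(rules)

    def install(self, switch: str, rule: str) -> None:
        self.installed.setdefault(switch, set()).add(rule)

    def remove(self, switch: str, rule: str) -> None:
        self.installed.get(switch, set()).discard(rule)

def migrate_flows(ctl: Controller, old_path, new_path, ingress_rule: str):
    """Move a virtual link's flows from old_path to new_path switches."""
    # 1. Pre-install forwarding rules along the new path (no traffic yet),
    #    ordering from egress to ingress so no switch forwards into a gap.
    for switch, rule in reversed(new_path):
        ctl.install(switch, rule)
    # 2. Redirect traffic by updating the single ingress rule.
    ingress_switch = new_path[0][0]
    ctl.install(ingress_switch, ingress_rule)
    # 3. Only then tear down the old path, so in-flight packets drain.
    for switch, rule in old_path:
        ctl.remove(switch, rule)

ctl = Controller()
old = [("s1", "old-fwd"), ("s2", "old-fwd")]
new = [("s3", "new-fwd"), ("s4", "new-fwd")]
migrate_flows(ctl, old, new, ingress_rule="redirect-to-s3")
print(ctl.installed)
```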
Network management often relies on machine learning to make predictions about performance and security from network traffic. Often, the representation of the traffic is as important as the choice of the model. The features that the model relies on, and the representation of those features, ultimately determine model accuracy, as well as where and whether the model can be deployed in practice. Thus, the design and evaluation of these models ultimately requires understanding not only model accuracy but also the systems costs associated with deploying the model in an operational network. Towards this goal, this paper develops a new framework and system that enables a joint evaluation of both the conventional notions of machine learning performance (e.g., model accuracy) and the systems-level costs of different representations of network traffic. We highlight these two dimensions for two practical network management tasks, video streaming quality inference and malware detection, to demonstrate the importance of exploring different representations to find the appropriate operating point. We demonstrate the benefit of exploring a range of representations of network traffic and present Traffic Refinery, a proof-of-concept implementation that both monitors network traffic at 10 Gbps and transforms traffic in real time to produce a variety of feature representations for machine learning. Traffic Refinery both highlights this design space and makes it possible to explore different representations for learning, balancing systems costs related to feature extraction and model training against model accuracy.
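The representation/cost trade-off this abstract studies can be illustrated with a toy example: the sketch below derives two representations of the same synthetic packet trace - cheap per-flow counters versus heavier per-flow packet-size histograms - and times their extraction. The packet schema and features are invented for illustration and are not Traffic Refinery's actual pipeline.

```python
# Toy illustration of the representation/cost trade-off: two feature
# representations of the same synthetic packet trace, with extraction
# timed. Packet schema and features are invented for illustration.
import time
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(2)
# Synthetic trace: (flow_id, packet_size) records.
packets = list(zip(rng.integers(0, 500, 200_000).tolist(),
                   rng.integers(40, 1500, 200_000).tolist()))

def flow_counters(trace):
    """Cheap representation: per-flow packet and byte counts."""
    feats = defaultdict(lambda: [0, 0])
    for flow, size in trace:
        feats[flow][0] += 1
        feats[flow][1] += size
    return feats

def size_histograms(trace, n_bins=32):
    """Heavier representation: per-flow packet-size histograms."""
    feats = defaultdict(lambda: np.zeros(n_bins, dtype=np.int64))
    for flow, size in trace:
        feats[flow][min(size * n_bins // 1500, n_bins - 1)] += 1
    return feats

for fn in (flow_counters, size_histograms):
    t0 = time.perf_counter()
    fn(packets)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```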
With the constant increase in demand for data connectivity, network service providers are faced with the task of reducing their capital and operational expenses while ensuring continual improvements to network performance. Although Network Function Virtualization (NFV) has been identified as a solution, several challenges must be addressed to ensure its feasibility. In this paper, we present a machine learning-based solution to the Virtual Network Function (VNF) placement problem. This paper proposes the Depth-Optimized Delay-Aware Tree (DO-DAT) model by using the particle swarm optimization technique to optimize decision tree hyper-parameters. Using the Evolved Packet Core (EPC) as a use case, we evaluate the performance of the model and compare it to a previously proposed model and a heuristic placement strategy.
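Mirroring the PSO sketch after the first abstract, the snippet below adapts the same swarm loop to decision-tree hyper-parameters (maximum depth and minimum split size), rounding continuous particle positions to integers. The search ranges, PSO coefficients, and data are again illustrative assumptions, not the DO-DAT configuration.

```python
# Sketch: PSO over decision-tree hyperparameters (max_depth,
# min_samples_split), rounding continuous positions to integers.
# Ranges, PSO coefficients, and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=3)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=3)

LO, HI = np.array([2.0, 2.0]), np.array([20.0, 40.0])  # search box

def fitness(p: np.ndarray) -> float:
    depth, min_split = int(round(p[0])), int(round(p[1]))
    clf = DecisionTreeClassifier(max_depth=depth,
                                 min_samples_split=min_split,
                                 random_state=3).fit(X_tr, y_tr)
    return clf.score(X_val, y_val)

rng = np.random.default_rng(3)
pos = rng.uniform(LO, HI, (8, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(10):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, LO, HI)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()]

print(f"best (depth, min_split) ≈ {np.round(gbest).astype(int)}, "
      f"val accuracy = {pbest_fit.max():.4f}")
```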
