
Virtual Network Migration on the GENI Wide-Area SDN-Enabled Infrastructure

Added by Yimeng Zhao
Publication date: 2017
Language: English





A virtual network (VN) contains a collection of virtual nodes and links assigned to underlying physical resources in a network substrate. VN migration is the process of remapping a VN's logical topology to a new set of physical resources to provide failure recovery, energy savings, or defense against attack. Providing VN migration that is transparent to running applications is a significant challenge. Efficient migration mechanisms are highly dependent on the technology deployed in the physical substrate. Prior work has considered migration in data centers and in the PlanetLab infrastructure. However, there has been little effort targeting an SDN-enabled wide-area networking environment - an important building block of future networking infrastructure. In this work, we are interested in the design, implementation, and evaluation of VN migration in GENI as a working example of such a future network. We identify and propose techniques to address key challenges: the dynamic allocation of resources during migration, managing hosts connected to the VN, and flow table migration sequences that minimize packet loss. We find that GENI's virtualization architecture makes transparent and efficient migration challenging. We suggest alternatives that might be adopted in GENI, and that are worthy of adoption by virtual network providers, to facilitate migration.
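As a rough illustration of the kind of flow table migration sequencing the abstract refers to, the sketch below pre-installs forwarding rules along the new physical path, flips the ingress rule last, and only then removes stale state. The `controller` object and its `install_flow`/`delete_flow` methods are hypothetical placeholders rather than a GENI or OpenFlow API; this is a minimal make-before-break sketch, not the paper's mechanism.

```python
# Minimal sketch of a make-before-break flow-table migration sequence.
# The `controller` object and its install_flow/delete_flow methods are
# hypothetical placeholders for whatever SDN controller API is in use.

def migrate_virtual_link(controller, old_path, new_path, match):
    """Remap one virtual link from old_path to new_path with minimal loss.

    old_path / new_path: ordered lists of (switch_id, out_port) hops.
    match: the flow match (e.g., a VLAN tag) identifying the VN's traffic.
    """
    # 1. Pre-install forwarding state on every switch of the new path
    #    (downstream first), so the path is complete before traffic uses it.
    for switch_id, out_port in reversed(new_path):
        controller.install_flow(switch_id, match=match, out_port=out_port)

    # 2. Flip the ingress switch last; this single rule update redirects
    #    traffic onto the already-populated new path.
    ingress_switch, ingress_port = new_path[0]
    controller.install_flow(ingress_switch, match=match,
                            out_port=ingress_port, priority=100)

    # 3. Only then remove the stale rules on the old path, so in-flight
    #    packets are not dropped mid-migration.
    new_switches = {switch_id for switch_id, _ in new_path}
    for switch_id, _ in old_path:
        if switch_id not in new_switches:
            controller.delete_flow(switch_id, match=match)
```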



Related research

With the increasing demand for openness, flexibility, and monetization, Network Function Virtualization (NFV) of mobile network functions has become an approach embraced by most mobile network operators. Early reported field deployments of the virtualized Evolved Packet Core (EPC) - the core network component of 4G LTE and 5G non-standalone mobile networks - reflect this growing trend. To best meet the requirements of power management, load balancing, and fault tolerance in the cloud environment, the need for live migration of these virtualized components cannot be ignored. Virtualization platforms of interest include both Virtual Machines (VMs) and Containers, with the latter option offering more lightweight characteristics. The first contribution of this paper is the implementation of a number of custom functions that enable migration of Containers supporting virtualized EPC components. The current CRIU-based migration of Docker Containers does not fully support the mobile network protocol stack; the CRIU extensions required to support it are therefore described in the paper. The second contribution is an experiment-based comprehensive analysis of live migration in two backhaul network settings and two virtualization technologies. The two backhaul network settings are the one provided by CloudLab and one based on a programmable optical network testbed that makes use of OpenROADM dense wavelength division multiplexing (DWDM) equipment. The paper compares the migration performance of the proposed implementation of OpenAirInterface (OAI)-based containerized EPC components with one utilizing VMs running in OpenStack. The presented experimental comparison accounts for a number of system parameters and configurations, the image size of the virtualized EPC components, network characteristics, and signal propagation time across the OpenROADM backhaul network.
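For orientation, the sketch below outlines a generic CRIU-based container checkpoint/restore migration driven through Docker's experimental checkpoint commands. The host names, container name, and checkpoint directory are assumptions, and the CRIU extensions for the mobile protocol stack described in the abstract are not reproduced here.

```python
# Rough sketch of a CRIU-based container migration workflow, assuming Docker's
# experimental checkpoint support is enabled on both hosts. Host names, paths,
# and the container name are illustrative only.

import subprocess

SRC_HOST = "epc-src"     # hypothetical source host
DST_HOST = "epc-dst"     # hypothetical destination host
CONTAINER = "oai-spgw"   # hypothetical containerized EPC component
CKPT = "mig1"
CKPT_DIR = "/tmp/ckpt"   # checkpoint directory, assumed identical on both hosts

def run(host, cmd):
    """Run a shell command on a remote host over ssh."""
    subprocess.run(["ssh", host, cmd], check=True)

# 1. Freeze the running container on the source and dump its state with CRIU.
run(SRC_HOST, f"docker checkpoint create --checkpoint-dir {CKPT_DIR} {CONTAINER} {CKPT}")

# 2. Push the checkpoint images to the destination (executed from the source host).
run(SRC_HOST, f"rsync -a {CKPT_DIR}/ {DST_HOST}:{CKPT_DIR}/")

# 3. Restore on the destination; a container with the same name must already
#    have been created there from the same image.
run(DST_HOST, f"docker start --checkpoint {CKPT} --checkpoint-dir {CKPT_DIR} {CONTAINER}")
```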
In Software-Defined Networking (SDN)-enabled cloud data centers, live migration is a key approach for the reallocation of Virtual Machines (VMs) in cloud services and Virtual Network Functions (VNFs) in Service Function Chaining (SFC). Using live migration, cloud providers can address their dynamic resource management and fault tolerance objectives without interrupting the service of users. However, in cloud data centers, performing multiple live migrations in arbitrary order can lead to service degradation. Efficient migration planning is therefore essential to reduce the impact of live migration overheads. In addition, to prevent Quality of Service (QoS) degradation and Service Level Agreement (SLA) violations, it is necessary to set priorities for live migration requests of varying urgency. In this paper, we propose SLAMIG, a set of algorithms that combines deadline-aware multiple migration grouping with online migration scheduling to determine the sequence of VM/VNF migrations. The experimental results show that our approach, with reasonable algorithm runtime, can efficiently reduce the number of deadline misses and achieves good migration performance compared with one-by-one scheduling and two state-of-the-art algorithms in terms of total migration time, average execution time, downtime, and transferred data. We also evaluate and analyze the impact of multiple migration planning and scheduling on QoS and energy consumption.
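The toy sketch below illustrates the general idea of deadline-aware migration grouping: requests are ordered by deadline and packed into groups whose members share no physical link, so each group can run concurrently. It is not the SLAMIG algorithm itself; all names, links, and deadlines are illustrative.

```python
# Toy illustration of deadline-aware migration grouping (not SLAMIG itself).

from dataclasses import dataclass, field

@dataclass
class Migration:
    vm: str
    deadline: float                                        # latest acceptable completion (s)
    links: frozenset = field(default_factory=frozenset)    # physical links the migration uses

def group_migrations(migrations):
    """Return a list of groups; groups are executed one after another."""
    pending = sorted(migrations, key=lambda m: m.deadline)  # earliest-deadline-first ordering
    groups = []
    while pending:
        group, used_links, rest = [], set(), []
        for m in pending:
            if m.links & used_links:   # shares a link with the current group
                rest.append(m)         # defer to a later group
            else:
                group.append(m)
                used_links |= m.links
        groups.append(group)
        pending = rest
    return groups

# Example: two migrations contend on link "s1-s2", the third does not.
plan = group_migrations([
    Migration("vnf-a", 10.0, frozenset({"s1-s2"})),
    Migration("vnf-b", 12.0, frozenset({"s1-s2", "s2-s3"})),
    Migration("vnf-c", 15.0, frozenset({"s4-s5"})),
])
for i, g in enumerate(plan):
    print(f"round {i}: {[m.vm for m in g]}")   # round 0: vnf-a, vnf-c; round 1: vnf-b
```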
We introduce real-time multi-technology transport layer monitoring to facilitate the coordinated virtualisation of optical and Ethernet networks supported by optical virtualisable transceivers (V-BVT). A monitoring and network resource configuration scheme is proposed that includes hardware monitoring in both the Ethernet and optical layers. The scheme depicts the data and control interactions among multiple network layers in a software defined network (SDN) setting, as well as the application that analyses the monitored data obtained from the database. We also present a re-configuration algorithm that adaptively modifies the composition of virtual optical networks based on two criteria. The proposed monitoring scheme is experimentally demonstrated with OpenFlow (OF) extensions for a holistic (re-)configuration across both layers in Ethernet switches and V-BVTs.
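As a loose illustration of such monitoring-driven reconfiguration, the sketch below polls a monitoring database and triggers one of two reactions. The trigger criteria, thresholds, and the `db`/`sdn` interfaces are hypothetical placeholders, since the abstract does not define the two criteria it uses.

```python
# Minimal sketch of a monitoring-driven reconfiguration loop. The two trigger
# criteria (a BER threshold and a utilization threshold) are assumptions, and
# db / sdn stand in for the monitoring database and SDN controller interfaces.

import time

BER_LIMIT = 1e-3    # assumed optical-layer pre-FEC BER threshold
UTIL_LIMIT = 0.9    # assumed Ethernet-port utilization threshold

def reconfigure_if_needed(db, sdn, von_id):
    """Check monitored data for one virtual optical network and react."""
    metrics = db.latest(von_id)              # e.g., {"ber": ..., "util": ...}
    if metrics["ber"] > BER_LIMIT:
        # Criterion 1 (placeholder): optical-layer degradation -> retune the V-BVT slice.
        sdn.retune_transceiver(von_id)
    elif metrics["util"] > UTIL_LIMIT:
        # Criterion 2 (placeholder): Ethernet-layer congestion -> grow allocated capacity.
        sdn.expand_capacity(von_id)

def monitor_loop(db, sdn, von_ids, period=5.0):
    """Periodically evaluate every virtual optical network."""
    while True:
        for von_id in von_ids:
            reconfigure_if_needed(db, sdn, von_id)
        time.sleep(period)
```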
Modern cloud orchestrators like Kubernetes provide a versatile and robust way to host applications at scale. One of their key features is autoscaling, which automatically adjusts cloud resources (compute, memory, storage) to adapt to the demands of applications. However, the scope of cloud autoscaling is limited to the datacenter hosting the cloud, and it does not apply uniformly to the allocation of network resources. In I/O-constrained or data-in-motion use cases this can lead to severe performance degradation for the application. For example, when the load on a cloud service increases and the Wide Area Network (WAN) connecting the datacenter to the Internet becomes saturated, the application flows experience increased delay and loss. In many cases this is dealt with by overprovisioning network capacity, which introduces additional costs and inefficiencies. On the other hand, thanks to the concept of Network as Code, the WAN exposes a set of APIs that can be used to dynamically allocate and de-allocate capacity on demand. In this paper we propose extending the concept of cloud autoscaling into the network to address this limitation. This way, applications running in the cloud can communicate their networking requirements, like bandwidth or traffic profile, to a Software-Defined Networking (SDN) controller or Network as a Service (NaaS) platform. Moreover, we aim to define the concepts of vertical and horizontal autoscaling applied to networking. We present a prototype that automatically allocates bandwidth to the underlay network according to the requirements of the applications hosted in Kubernetes. Finally, we discuss open research challenges.
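A minimal sketch of this idea follows, assuming applications declare their bandwidth needs through a Kubernetes service annotation and a NaaS endpoint accepts allocation requests. The annotation key, endpoint URL, and request format are invented for illustration; only the Kubernetes client calls are standard.

```python
# Sketch of a network-autoscaling reconciliation loop: read bandwidth
# requirements from service annotations and forward them to a NaaS endpoint.
# The annotation key, NAAS_URL, and its payload format are hypothetical.

import time
import requests
from kubernetes import client, config

NAAS_URL = "http://naas.example.local/api/v1/bandwidth"            # placeholder endpoint
ANNOTATION = "networking.example.io/requested-bandwidth-mbps"      # placeholder key

def reconcile(namespace="default"):
    v1 = client.CoreV1Api()
    for svc in v1.list_namespaced_service(namespace).items:
        annotations = svc.metadata.annotations or {}
        if ANNOTATION in annotations:
            mbps = int(annotations[ANNOTATION])
            # Ask the NaaS / SDN controller to (re)allocate WAN capacity
            # for this service's external traffic.
            requests.post(NAAS_URL, json={
                "service": svc.metadata.name,
                "namespace": namespace,
                "bandwidth_mbps": mbps,
            }, timeout=5)

if __name__ == "__main__":
    config.load_kube_config()      # or load_incluster_config() inside a pod
    while True:
        reconcile()
        time.sleep(30)             # periodic reconciliation
```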
Despite the proliferation of mobile devices in various wide-area Internet of Things applications (e.g., smart city, smart farming), current Low-Power Wide-Area Networks (LPWANs) are not designed to effectively support mobile nodes. In this paper, we propose to handle mobility in SNOW (Sensor Network Over White spaces), an LPWAN that operates in the TV white spaces. SNOW supports massive concurrent communication between a base station (BS) and numerous low-power nodes through a distributed implementation of OFDM. In SNOW, inter-carrier interference (ICI) is more pronounced under mobility due to its OFDM-based design. Geospatial variation of white spaces also raises challenges for both intra- and inter-network mobility, as the low-power nodes are not equipped to determine white spaces. To handle the impact of mobility on ICI, we propose a dynamic carrier frequency offset estimation and compensation technique that takes Doppler shifts into account without requiring knowledge of the nodes' speed. We also propose to circumvent the impact of mobility on the geospatial variation of white space through mobility-aware spectrum assignment to nodes. To enable mobility of nodes across different SNOWs, we propose efficient handoff management through fast and energy-efficient BS discovery and quick association with the BS by combining time- and frequency-domain energy sensing. Experiments through SNOW deployments in a large metropolitan city and indoors show that our proposed approaches enable mobility across multiple different SNOWs and provide robustness in terms of reliability, latency, and energy consumption under mobility.
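To make the carrier frequency offset handling concrete, the sketch below shows a textbook repeated-preamble estimator and de-rotation step of the kind such a receiver might apply. It is not the paper's specific estimation technique, and the sample rate and preamble length are assumptions.

```python
# Generic carrier-frequency-offset estimation and compensation on an OFDM
# receive buffer (textbook repeated-preamble estimator, not the paper's
# algorithm). Sample rate and half-preamble length are assumed values.

import numpy as np

FS = 1e6    # assumed sample rate (Hz)
N = 256     # assumed half-preamble length in samples

def estimate_cfo(rx, n=N, fs=FS):
    """Estimate the frequency offset (Hz) from a preamble with two identical halves."""
    first, second = rx[:n], rx[n:2 * n]
    # A frequency offset rotates the second half by exp(j*2*pi*f_off*n/fs)
    # relative to the first; the angle of the correlation recovers it.
    corr = np.sum(np.conj(first) * second)
    return np.angle(corr) * fs / (2 * np.pi * n)

def compensate_cfo(rx, f_off, fs=FS):
    """De-rotate the whole buffer by the estimated offset."""
    t = np.arange(len(rx)) / fs
    return rx * np.exp(-2j * np.pi * f_off * t)

# Example: a preamble whose halves are identical, hit by a 500 Hz Doppler-like offset.
t = np.arange(2 * N) / FS
clean = np.exp(2j * np.pi * (10 * FS / N) * t)    # periodic over N samples
rx = clean * np.exp(2j * np.pi * 500.0 * t)       # apply the frequency offset
f_hat = estimate_cfo(rx)
print(f"estimated offset: {f_hat:.1f} Hz")        # ~500 Hz
corrected = compensate_cfo(rx, f_hat)
```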
