Recently, Multipath TCP (MPTCP) has been proposed as an alternative transport approach for datacenter networks. MPTCP provides the ability to split a flow across multiple paths, thus providing better performance and resilience to failures. Usually, MPTCP is combined with flow-based Equal-Cost Multi-Path Routing (ECMP), which uses random hashing to split the MPTCP subflows over different paths. However, random hashing can be suboptimal, as distinct subflows may end up using the same paths while other available paths remain unused. In this paper, we explore an MPTCP-aware SDN controller that facilitates an alternative routing mechanism for the MPTCP subflows. The controller uses packet inspection to provide deterministic assignment of subflows to paths. Using the controller, we show that MPTCP can deliver significantly improved performance when connections are not limited by the access links of hosts. To lessen the throughput limitation imposed by access links, we also investigate the use of multiple interfaces at the hosts. We demonstrate, using our modification of the MPTCP Linux kernel, that using multiple subflows per pair of IP addresses can yield improved performance in multi-interface settings.
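The deterministic subflow-to-path assignment described above can be illustrated with a minimal sketch. The code below is a toy illustration rather than the paper's controller (the class and method names such as PathAssigner are invented): it contrasts flow-based ECMP hashing with a per-connection round-robin assignment keyed on the MPTCP token that a controller could learn through packet inspection.

```python
# Minimal sketch (not the paper's controller): deterministic assignment of
# MPTCP subflows to equal-cost paths, contrasted with ECMP-style random hashing.
# All names here (PathAssigner, deterministic, ecmp_hash) are illustrative.
from dataclasses import dataclass, field

@dataclass
class PathAssigner:
    paths: list                                 # candidate equal-cost paths (path IDs)
    _next: dict = field(default_factory=dict)   # per-connection round-robin index

    def ecmp_hash(self, five_tuple) -> int:
        """Flow-based ECMP: random hashing may map distinct subflows to the same path."""
        return hash(five_tuple) % len(self.paths)

    def deterministic(self, mptcp_token) -> int:
        """Deterministic assignment: subflows of one MPTCP connection (identified
        by its token, learned via packet inspection) are spread over distinct paths."""
        idx = self._next.get(mptcp_token, 0)
        self._next[mptcp_token] = idx + 1
        return idx % len(self.paths)

if __name__ == "__main__":
    assigner = PathAssigner(paths=[0, 1, 2, 3])
    token = 0xBEEF
    # Four subflows of the same connection land on four different paths.
    print([assigner.deterministic(token) for _ in range(4)])                      # [0, 1, 2, 3]
    # Random hashing of distinct subflow 5-tuples may collide on the same path.
    print([assigner.ecmp_hash(("10.0.0.1", p, "10.0.1.1", 80, 6)) for p in range(4)])
```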
In this paper, the problem of vertical handover in software-defined network (SDN) based heterogeneous networks (HetNets) is studied. In the studied model, HetNets are required to offer diverse services to mobile users. Using an SDN controller, HetNets can manage user access and mobility, but they still suffer from the ping-pong effect and service interruption during vertical handover. To solve these problems, a mobility-aware seamless handover method based on multipath transmission control protocol (MPTCP) is proposed. The proposed handover method is executed in the controller of the software-defined HetNets (SDHetNets) and consists of three steps: location prediction, network selection, and handover execution. In particular, the method first predicts the user's location at the next moment with an echo state network (ESN). Given the predicted location, the SDHetNet controller can determine the candidate network set for the handover so that network wireless resources can be pre-allocated. Second, the target network is selected through a fuzzy analytic hierarchy process (FAHP) algorithm, jointly considering user preferences, service requirements, network attributes, and user mobility patterns. Then, seamless handover is realized through the proposed MPTCP-based handover mechanism. Simulations using real-world user trajectory data from the Korea Advanced Institute of Science & Technology show that the proposed method can reduce handover times by 10.85% to 29.12% compared with traditional methods. The proposed method also maintains at least one MPTCP subflow connected during the handover process and achieves a seamless handover.
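As a rough illustration of the network-selection step, the sketch below uses a plain (non-fuzzy) AHP weighting as a simplified stand-in for FAHP; the pairwise-comparison values, criteria, and candidate networks are invented examples rather than figures from the paper.

```python
# Illustrative sketch of the network-selection step only: a plain AHP weighting
# (the fuzzy extension of FAHP is omitted for brevity) followed by a weighted
# score over candidate networks. Criteria names and values are invented examples.
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Derive criteria weights from a pairwise-comparison matrix
    using the geometric-mean (approximate eigenvector) method."""
    gm = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

# Example pairwise comparisons for: bandwidth, delay, cost, signal strength.
pairwise = np.array([
    [1,   3,   5,   3],
    [1/3, 1,   3,   1],
    [1/5, 1/3, 1,   1/3],
    [1/3, 1,   3,   1],
])
w = ahp_weights(pairwise)

# Candidate networks with normalized attribute scores in [0, 1] (higher is better).
candidates = {
    "WiFi-AP1": np.array([0.9, 0.6, 0.8, 0.7]),
    "LTE-eNB2": np.array([0.6, 0.8, 0.4, 0.9]),
    "5G-gNB3":  np.array([0.8, 0.9, 0.3, 0.6]),
}
target = max(candidates, key=lambda n: float(w @ candidates[n]))
print("criteria weights:", np.round(w, 3))
print("selected target network:", target)
```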
In Software-Defined Networking (SDN)-enabled cloud data centers, live migration is a key approach for reallocating Virtual Machines (VMs) in cloud services and Virtual Network Functions (VNFs) in Service Function Chaining (SFC). Using live migration, cloud providers can address their dynamic resource management and fault tolerance objectives without interrupting users' service. However, in cloud data centers, performing multiple live migrations in arbitrary order can lead to service degradation. Therefore, efficient migration planning is essential to reduce the impact of live migration overheads. In addition, to prevent Quality of Service (QoS) degradation and Service Level Agreement (SLA) violations, it is necessary to prioritize live migration requests with different levels of urgency. In this paper, we propose SLAMIG, a set of algorithms that combines deadline-aware multiple-migration grouping with online migration scheduling to determine the sequence of VM/VNF migrations. The experimental results show that our approach, with reasonable algorithm runtime, efficiently reduces the number of deadline misses and achieves good migration performance compared with one-by-one scheduling and two state-of-the-art algorithms in terms of total migration time, average execution time, downtime, and transferred data. We also evaluate and analyze the impact of multiple migration planning and scheduling on QoS and energy consumption.
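To make the grouping idea concrete, the following is a simplified sketch, not the SLAMIG algorithms themselves: migrations are ordered by deadline and greedily grouped so that migrations sharing a source or destination host never run concurrently. All class and field names, and the example figures, are illustrative assumptions.

```python
# Simplified sketch of deadline-aware migration grouping (not SLAMIG itself):
# earliest-deadline-first ordering, then greedy grouping so that migrations
# sharing a source or destination host are never placed in the same group.
from dataclasses import dataclass

@dataclass
class Migration:
    vm: str
    src: str
    dst: str
    deadline: float   # seconds from now

def group_by_deadline(migrations):
    """Build concurrent groups: one pass per group over the remaining
    migrations in deadline order, deferring any migration whose source or
    destination host is already busy in the current group."""
    pending = sorted(migrations, key=lambda m: m.deadline)
    groups = []
    while pending:
        busy_hosts, group, rest = set(), [], []
        for m in pending:
            if m.src in busy_hosts or m.dst in busy_hosts:
                rest.append(m)           # host conflict: defer to a later group
            else:
                group.append(m)
                busy_hosts.update({m.src, m.dst})
        groups.append(group)
        pending = rest
    return groups

if __name__ == "__main__":
    migs = [
        Migration("vm1", "h1", "h2", deadline=30),
        Migration("vm2", "h1", "h3", deadline=60),   # shares h1 with vm1
        Migration("vm3", "h4", "h5", deadline=45),
    ]
    for i, g in enumerate(group_by_deadline(migs)):
        print(f"group {i}: {[m.vm for m in g]}")
```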
Network Function Virtualization (NFV) is a promising technology that can significantly reduce the operational costs of network services by deploying virtualized network functions (VNFs) on commodity servers in place of dedicated hardware middleboxes. VNFs typically run on virtual machine instances in a cloud infrastructure, where the virtualization technology enables dynamic provisioning of VNF instances to process the fluctuating traffic that needs to go through the network functions in a network service. In this paper, we target dynamic provisioning of enterprise network services - expressed as one or multiple service chains - in cloud datacenters, and design efficient online algorithms without requiring any information on future traffic rates. The key is to decide the number of instances of each VNF type to provision at each time, taking into consideration the server resource capacities and the traffic rates between adjacent VNFs in a service chain. In the case of a single service chain, we discover an elegant structure of the problem and design an efficient randomized algorithm achieving an e/(e-1) competitive ratio. For multiple concurrent service chains, we propose an online heuristic algorithm that is O(1)-competitive. We demonstrate the effectiveness of our algorithms through solid theoretical analysis and trace-driven simulations.
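The per-slot provisioning decision can be sketched as below. This is only the basic covering rule with a lazy scale-down to hint at amortizing instance start-up cost; it is not the paper's randomized e/(e-1)-competitive algorithm, and the capacity and traffic figures are invented.

```python
# Sketch of a basic per-slot provisioning decision (not the paper's randomized
# algorithm): the number of instances of a VNF must cover the incoming traffic
# rate given a per-instance capacity; scale-down is deferred for a few slots.
import math

def provision(traffic_rate, per_instance_capacity, current, keep_idle_slots=2, idle_age=0):
    """Return (instances, idle_age): scale up immediately to cover demand,
    but only scale down after the surplus has persisted for keep_idle_slots."""
    needed = math.ceil(traffic_rate / per_instance_capacity)
    if needed >= current:
        return needed, 0
    idle_age += 1
    return (needed, 0) if idle_age >= keep_idle_slots else (current, idle_age)

if __name__ == "__main__":
    rates = [120, 300, 280, 90, 80, 75]    # traffic entering one VNF per time slot
    inst, age = 0, 0
    for t, r in enumerate(rates):
        inst, age = provision(r, per_instance_capacity=100, current=inst, idle_age=age)
        print(f"slot {t}: rate={r}, instances={inst}")
```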
In this work, we propose online traffic engineering as a novel approach to detect and mitigate an emerging class of stealthy Denial of Service (DoS) link-flooding attacks. Our approach exploits the Software Defined Networking (SDN) paradigm, which renders the management of network traffic more flexible through centralised flow-level control and monitoring. We implement a full prototype of our solution on an emulated SDN environment using OpenFlow to interface with the network devices. We further discuss useful insights gained from our preliminary experiments as well as a number of open research questions which constitute work in progress.
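A framework-agnostic sketch of the detection logic is given below; the actual prototype interfaces with the network devices via OpenFlow, which is omitted here, and the thresholds and class name are illustrative assumptions rather than values from the paper.

```python
# Framework-agnostic sketch of the detection side of the approach (the real
# prototype uses OpenFlow; controller APIs are omitted): a link is flagged as a
# potential link-flooding target when its polled utilization stays above a
# threshold for several consecutive monitoring intervals.
from collections import defaultdict

class LinkMonitor:
    def __init__(self, threshold=0.9, persistence=3):
        self.threshold = threshold          # utilization fraction of link capacity
        self.persistence = persistence      # consecutive saturated polls before flagging
        self._streak = defaultdict(int)

    def update(self, link, utilization):
        """Feed one polled utilization sample; return True if the link should
        be treated as a suspected link-flooding target."""
        if utilization >= self.threshold:
            self._streak[link] += 1
        else:
            self._streak[link] = 0
        return self._streak[link] >= self.persistence

if __name__ == "__main__":
    mon = LinkMonitor()
    samples = [0.95, 0.97, 0.96, 0.98]      # a persistently saturated link
    for t, u in enumerate(samples):
        if mon.update(("s1", "s2"), u):
            print(f"poll {t}: link s1-s2 flagged; trigger traffic re-engineering")
```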
The amount of CO$_2$ emitted per kilowatt-hour on an electricity grid varies by time of day and substantially varies by location due to the types of generation. Networked collections of warehouse-scale computers, sometimes called Hyperscale Computing, emit more carbon than needed if operated without regard to these variations in carbon intensity. This paper introduces Google's system for Carbon-Intelligent Compute Management, which actively minimizes electricity-based carbon footprint and power infrastructure costs by delaying temporally flexible workloads. The core component of the system is a suite of analytical pipelines used to gather the next day's carbon intensity forecasts, train day-ahead demand prediction models, and use risk-aware optimization to generate the next day's carbon-aware Virtual Capacity Curves (VCCs) for all datacenter clusters across Google's fleet. VCCs impose hourly limits on resources available to temporally flexible workloads while preserving overall daily capacity, enabling all such workloads to complete within a day. Data from operation shows that VCCs effectively limit hourly capacity when the grid's energy supply mix is carbon intensive and delay the execution of temporally flexible workloads to greener times.
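The VCC idea can be illustrated with a toy allocation that is not Google's risk-aware optimization: hourly caps are shaped inversely to the day-ahead carbon-intensity forecast while their sum is kept equal to the normal daily flexible capacity, so all flexible work can still complete within the day. All numbers below are made up.

```python
# Toy sketch of the Virtual Capacity Curve (VCC) idea, not Google's risk-aware
# optimization: hourly caps on flexible capacity are weighted toward greener
# (lower-intensity) hours while the daily total is preserved.

def virtual_capacity_curve(carbon_forecast, daily_capacity, hourly_max):
    """Allocate daily_capacity across hours, weighting low-carbon hours more
    heavily and clipping each hour at the cluster's physical hourly_max."""
    weights = [1.0 / c for c in carbon_forecast]
    caps = [daily_capacity * w / sum(weights) for w in weights]
    # Clip to the hourly limit and redistribute the clipped surplus
    # proportionally to the remaining headroom, keeping the daily sum intact.
    surplus = sum(max(0.0, c - hourly_max) for c in caps)
    caps = [min(c, hourly_max) for c in caps]
    headroom = [hourly_max - c for c in caps]
    total_headroom = sum(headroom) or 1.0
    return [c + surplus * h / total_headroom for c, h in zip(caps, headroom)]

if __name__ == "__main__":
    forecast = [300, 280, 260, 420, 480, 500, 450, 350]   # gCO2/kWh per hour
    vcc = virtual_capacity_curve(forecast, daily_capacity=800, hourly_max=130)
    print([round(c, 1) for c in vcc])
    print("daily total preserved:", round(sum(vcc), 1))
```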