
Minimizing Flow Completion Times using Adaptive Routing over Inter-Datacenter Wide Area Networks

Publication date: 2018
Language: English





Inter-datacenter networks connect dozens of geographically dispersed datacenters and carry traffic flows with highly variable sizes and different classes. Adaptive flow routing can improve efficiency and performance by assigning paths to new flows according to network status and flow properties. A popular approach widely used for traffic engineering bases routing decisions on the current bandwidth utilization of links. We propose an alternative that reduces bandwidth usage by at least 50% and flow completion times by at least 40% in the best case, across various scheduling policies and flow size distributions.
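To make the contrast concrete, the following is a minimal Python sketch of the two policies the abstract compares: choosing the path with the lowest link utilization versus choosing the path with the smallest rough completion-time estimate that also accounts for bytes already queued on each link. The topology, candidate paths, capacities, numbers, and the estimate itself are illustrative assumptions, not the paper's model or results.

```python
# Illustrative sketch (not the paper's algorithm): utilization-based vs
# size-aware path selection for a new flow on a toy inter-datacenter graph.

# Candidate paths for a new flow, each a list of links (assumed pre-computed).
CANDIDATE_PATHS = {
    "p1": ["A-B", "B-C"],
    "p2": ["A-D", "D-C"],
}

# Per-link capacity (Gbps), bytes still outstanding, and current utilization.
CAPACITY = {"A-B": 10, "B-C": 10, "A-D": 40, "D-C": 40}
OUTSTANDING_BYTES = {"A-B": 2e9, "B-C": 1e9, "A-D": 30e9, "D-C": 5e9}
UTILIZATION = {"A-B": 0.2, "B-C": 0.1, "A-D": 0.8, "D-C": 0.3}


def least_utilized_path(paths):
    """Classic traffic-engineering choice: minimize the worst link utilization."""
    return min(paths, key=lambda p: max(UTILIZATION[l] for l in paths[p]))


def size_aware_path(paths, new_flow_bytes):
    """Adaptive choice: minimize a rough completion-time estimate that also
    counts the bytes already queued on each link (illustrative model)."""
    def estimate(p):
        return max(
            (OUTSTANDING_BYTES[l] + new_flow_bytes) / (CAPACITY[l] * 1e9 / 8)
            for l in paths[p]
        )
    return min(paths, key=estimate)


if __name__ == "__main__":
    print("least-utilized picks:", least_utilized_path(CANDIDATE_PATHS))
    print("size-aware picks:   ", size_aware_path(CANDIDATE_PATHS, 20e9))
```

In this toy instance the utilization-based rule prefers the lightly loaded but slow path, while the size-aware rule prefers the higher-capacity path whose queued bytes drain faster; the gap between the two choices is what adaptive, flow-aware routing exploits.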



Related research


Long flows contribute huge volumes of traffic over inter-datacenter WANs. The Flow Completion Time (FCT) is a vital network performance metric that affects the running time of distributed applications and users' quality of experience. Flow routing techniques based on propagation or queuing latency, or on instantaneous link utilization, are insufficient for minimizing the FCT of long flows. We propose a routing approach that uses the remaining sizes and paths of all ongoing flows to minimize the worst-case completion time of incoming flows, assuming no knowledge of future flow arrivals. Our approach can be formulated as an NP-Hard graph optimization problem. We propose BWRH, a heuristic that quickly generates an approximate solution. We evaluate BWRH against several real WAN topologies and two different traffic patterns. We see that BWRH provides solutions with an average optimality gap of less than 0.25%. Furthermore, we show that compared to other popular routing heuristics, BWRH reduces the mean and tail FCT by up to 1.46× and 1.53×, respectively.
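A minimal sketch of the routing idea described above, under simplifying assumptions: candidate paths are given (e.g., pre-enumerated), and the worst case for the new flow on a path is bounded by assuming every ongoing flow sharing a link is served before it. This follows the spirit of the abstract, not the paper's exact BWRH formulation; all names here are illustrative.

```python
# Illustrative worst-case-aware path selection: pick the candidate path with
# the smallest pessimistic completion-time bound for the new flow.

def worst_case_bound(path, new_flow_bytes, ongoing_bytes_per_link, capacity_bps):
    """Upper-bound the new flow's completion time on `path`, assuming every
    ongoing flow sharing a link is served before it (pessimistic model)."""
    return max(
        (ongoing_bytes_per_link.get(link, 0.0) + new_flow_bytes) / capacity_bps[link]
        for link in path
    )


def best_worst_case_path(candidate_paths, new_flow_bytes, ongoing, capacity):
    """Return the candidate path minimizing the worst-case completion bound."""
    return min(
        candidate_paths,
        key=lambda p: worst_case_bound(p, new_flow_bytes, ongoing, capacity),
    )
```

Restricting the search to a small candidate set is one common way to keep such a heuristic fast, since the abstract notes that finding the truly best worst-case path is NP-Hard.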
Flow routing over inter-datacenter networks is a well-known problem in which the network assigns a path to a newly arriving flow, potentially according to the network conditions and the properties of the new flow. An essential system-wide performance metric for a routing algorithm is flow completion time, which affects the performance of applications running across multiple datacenters. Current static and dynamic routing approaches do not take advantage of flow size information in routing, which is practical in a controlled environment such as inter-datacenter networks managed by the datacenter operators. In this paper, we discuss Best Worst-case Routing (BWR), which aims at optimizing the tail completion times of long-running flows over inter-datacenter networks with non-uniform link capacities. Since finding the path with the best worst-case completion time for a new flow is NP-Hard, we investigate two heuristics, BWRH and BWRHF, which use two different upper bounds on the worst-case completion times for routing. We evaluate BWRH and BWRHF against several real WAN topologies and multiple traffic patterns. Although BWRH better models the BWR problem, BWRH and BWRHF show negligible difference across various system-wide performance metrics, while BWRHF is significantly faster. Furthermore, we show that compared to other popular routing heuristics, BWRHF can reduce the mean and tail flow completion times by over 1.5× and 2×, respectively.
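For contrast with the candidate-path sketch above, here is a sketch of a faster, shortest-path-style heuristic in the spirit of what the abstract attributes to BWRHF: assign each link a weight derived from its outstanding bytes plus the new flow's size, normalized by capacity, and run a single Dijkstra pass. The weight function is an assumption for illustration only; the paper's exact upper bound may differ.

```python
# Illustrative shortest-path heuristic: one Dijkstra pass per new flow over
# weights derived from per-link backlog and the new flow's size.
import heapq

def route_new_flow(graph, src, dst, new_flow_bytes):
    """graph: {node: {neighbor: (outstanding_bytes, capacity_bps)}}.
    Returns (bound, path) for the weight-minimizing path from src to dst."""
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (outstanding, cap) in graph[node].items():
            if nxt not in seen:
                weight = (outstanding + new_flow_bytes) / cap
                heapq.heappush(heap, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []
```

Because routing reduces to one shortest-path computation per flow arrival, this style of heuristic scales to high arrival rates, which is consistent with the abstract's observation that BWRHF is significantly faster.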
Datacenters provide the infrastructure for cloud computing services used by millions of users every day. Many such services are distributed over multiple datacenters at geographically distant locations, possibly in different continents. These datacenters are then connected through high-speed WAN links over private or public networks. To perform data backups or data synchronization operations, many transfers take place over these networks that have to be completed before a deadline in order to provide necessary service guarantees to end users. Upon arrival of a transfer request, we would like the system to be able to decide whether such a request can be guaranteed successful delivery. If yes, it should provide us with a transmission schedule in the shortest time possible. In addition, we would like to avoid packet reordering at the destination, as it affects TCP performance. Previous work in this area either cannot guarantee that admitted transfers actually finish before the specified deadlines or uses techniques that can result in packet reordering. In this paper, we propose DCRoute, a fast and efficient routing and traffic allocation technique that guarantees transfer completion before deadlines for admitted requests. It assigns each transfer a single path to avoid packet reordering. Through simulations, we show that DCRoute is at least 200 times faster than other traffic allocation techniques based on linear programming (LP) while admitting almost the same amount of traffic to the system.
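The admission question posed here can be illustrated with a slotted, single-path sketch: admit a transfer only if the residual capacity on its path up to the deadline can carry its volume, then reserve capacity from the deadline backwards so earlier slots stay free for tighter future requests. This is a simplified illustration under assumed data structures; DCRoute's actual path selection and allocation adjustments are more involved.

```python
# Illustrative deadline-aware admission control on a single, already-chosen
# path. Time is slotted; residual[t] is the path's free capacity (bytes per
# slot) in slot t.

def admit(residual, now, deadline, volume_bytes):
    """Admit iff the path's residual capacity between now and the deadline can
    carry the whole transfer; if admitted, reserve capacity latest-slot-first
    so earlier slots stay available for future, tighter deadlines."""
    slots = range(now, deadline + 1)
    if sum(residual[t] for t in slots) < volume_bytes:
        return False, {}           # cannot guarantee completion by the deadline
    schedule, remaining = {}, volume_bytes
    for t in reversed(slots):      # fill from the deadline backwards
        take = min(residual[t], remaining)
        if take > 0:
            schedule[t] = take
            residual[t] -= take
            remaining -= take
        if remaining == 0:
            break
    return True, schedule
```

Keeping each admitted transfer on one path, as the abstract emphasizes, is what avoids packet reordering at the destination; the admission test above only decides whether that single path can meet the deadline.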
Several organizations have built multiple datacenters connected via dedicated wide area networks over which large inter-datacenter transfers take place. This includes tremendous volumes of bulk multicast traffic generated as a result of data and content replication. Although one can perform these transfers using a single multicast forwarding tree, that can lead to poor performance as the slowest receiver on each tree dictates the completion time for all receivers. Using multiple trees per transfer, each connected to a subset of receivers, alleviates this concern. The choice of multicast trees also determines the total bandwidth usage. To further improve performance, bandwidth over dedicated inter-datacenter networks can be carved for different multicast trees over specific time periods to avoid congestion and minimize the average receiver completion times. In this paper, we break this problem into the three sub-problems of partitioning, tree selection, and rate allocation. We present an algorithm called QuickCast which is computationally fast and allows us to significantly speed up multiple receivers per bulk multicast transfer with control over extra bandwidth consumption. We evaluate QuickCast against a variety of synthetic and real traffic patterns as well as real WAN topologies. Compared to performing bulk multicast transfers as separate unicast transfers, QuickCast achieves up to a 3.64× reduction in mean completion times while using 0.71× the bandwidth. Also, QuickCast allows the top 50% of receivers to complete between 3× and 35× faster on average compared with when a single multicast forwarding tree is used for data delivery.
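The partitioning step can be illustrated with a simple sketch: split receivers into a fast and a slow group by hop distance from the source, then build one forwarding tree per group as a union of shortest paths. QuickCast's actual partitioning, tree selection, and rate allocation are more sophisticated; this only shows why isolating slow receivers keeps them from dragging down the fast ones. The graph format, grouping rule, and function names are assumptions for illustration.

```python
# Illustrative two-group receiver partitioning for a bulk multicast transfer,
# assuming a connected, unweighted topology {node: [neighbors]}.
from collections import deque

def shortest_paths(graph, src):
    """BFS shortest paths from src; returns (parent, hop-distance) maps."""
    parent, dist = {src: None}, {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v], parent[v] = dist[u] + 1, u
                q.append(v)
    return parent, dist

def partition_and_build_trees(graph, src, receivers):
    """Split receivers by median hop distance and build one tree per group."""
    parent, dist = shortest_paths(graph, src)
    cutoff = sorted(dist[r] for r in receivers)[len(receivers) // 2]
    groups = {"fast": [r for r in receivers if dist[r] <= cutoff],
              "slow": [r for r in receivers if dist[r] > cutoff]}
    trees = {}
    for name, group in groups.items():
        edges = set()
        for r in group:                  # union of BFS paths forms one tree
            node = r
            while parent[node] is not None:
                edges.add((parent[node], node))
                node = parent[node]
        trees[name] = edges
    return groups, trees
```

With two trees, the fast group's completion time is no longer tied to the slowest receiver, at the cost of some extra edges, which mirrors the completion-time versus bandwidth trade-off the abstract quantifies.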
Datacenters provide cost-effective and flexible access to scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements. This includes user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider across a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks that connect geographically dispersed datacenters, which have been receiving increasing attention recently and pose interesting and novel research problems.
