
Datacenter Traffic Control: Understanding Techniques and Trade-offs

Publication date: 2017
Language: English





Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
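Among the mechanisms the abstract lists, traffic shaping is often implemented with a token bucket. The sketch below is purely illustrative (the class and parameter names are our own, not a mechanism from the paper): packets are admitted only while enough byte credit has accumulated at the configured rate.

```python
class TokenBucket:
    """Illustrative token-bucket traffic shaper: a packet is admitted
    only if enough tokens (bytes of credit) have accumulated."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # token fill rate (bytes/second)
        self.capacity = burst_bytes   # maximum burst size
        self.tokens = burst_bytes     # start with a full bucket
        self.last = 0.0               # timestamp of the last update

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # packet conforms to the shaped rate; send it
        return False      # packet exceeds the shaped rate; queue or drop
```

For example, with `TokenBucket(rate_bps=1000, burst_bytes=1500)`, a 1500-byte packet at time 0 is admitted, an immediate second one is rejected, and a 1000-byte packet one second later is admitted again once tokens have refilled.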

Related research

Datacenters provide the infrastructure for cloud computing services used by millions of users every day. Many such services are distributed over multiple datacenters at geographically distant locations, possibly in different continents. These datacenters are then connected through high-speed WAN links over private or public networks. To perform data backups or data synchronization operations, many transfers take place over these networks that have to be completed before a deadline in order to provide necessary service guarantees to end users. Upon arrival of a transfer request, we would like the system to be able to decide whether such a request can be guaranteed successful delivery. If so, it should provide us with a transmission schedule in the shortest time possible. In addition, we would like to avoid packet reordering at the destination, as it affects TCP performance. Previous work in this area either cannot guarantee that admitted transfers actually finish before the specified deadlines or uses techniques that can result in packet reordering. In this paper, we propose DCRoute, a fast and efficient routing and traffic allocation technique that guarantees transfer completion before deadlines for admitted requests. It assigns each transfer a single path to avoid packet reordering. Through simulations, we show that DCRoute is at least 200 times faster than other traffic allocation techniques based on linear programming (LP) while admitting almost the same amount of traffic to the system.
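The core decision described above, admit a transfer only if it can finish on its single path before its deadline, can be illustrated with a toy slotted-time model. This is a minimal sketch under our own assumptions (a single fixed path with per-slot residual capacity), not the actual DCRoute allocation algorithm:

```python
def admit(residual, volume, deadline):
    """Toy deadline admission test on one path with slotted time.
    residual[t] = unused capacity of the path in timeslot t.
    Admits the transfer only if it can finish by `deadline`, then
    reserves capacity as late as possible (an illustrative policy,
    not DCRoute's actual algorithm).  Returns the per-slot
    allocation, or None if the deadline cannot be met."""
    if sum(residual[:deadline]) < volume:
        return None                        # cannot meet the deadline
    alloc = [0.0] * len(residual)
    remaining = volume
    for t in range(deadline - 1, -1, -1):  # fill the latest slots first
        take = min(residual[t], remaining)
        alloc[t] = take
        residual[t] -= take                # reserve capacity on the path
        remaining -= take
        if remaining == 0:
            break
    return alloc
```

For instance, a transfer of 5 units with deadline 3 on a path with residual capacity 2 per slot is admitted and scheduled as [1, 2, 2, 0], while a 5-unit transfer over two slots of capacity 1 is rejected outright.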
Inter-datacenter networks connect dozens of geographically dispersed datacenters and carry traffic flows with highly variable sizes and different classes. Adaptive flow routing can improve efficiency and performance by assigning paths to new flows according to network status and flow properties. A popular approach widely used for traffic engineering is based on the current bandwidth utilization of links. We propose an alternative that reduces bandwidth usage by up to 50% and flow completion times by up to 40% across various scheduling policies and flow size distributions.
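The utilization-based baseline this abstract contrasts against can be sketched in a few lines; a common variant assigns each new flow to the path whose most-loaded link is least utilized. This is a generic illustration of that baseline (the function and variable names are our own), not the paper's proposed alternative:

```python
def pick_path(paths, utilization):
    """Utilization-based adaptive routing baseline: assign a new flow
    to the path whose most-utilized link currently has the lowest
    bandwidth utilization.
    paths        -- list of candidate paths, each a list of link ids
    utilization  -- dict mapping link id -> utilization in [0, 1]"""
    return min(paths, key=lambda p: max(utilization[l] for l in p))
```

For example, between a path containing a 90%-utilized link and one whose busiest link is at 50%, the second path is chosen for the new flow.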
This paper studies the optimal output-feedback control of a linear time-invariant system where a stochastic event-based scheduler triggers the communication between the sensor and the controller. The primary goal of this type of scheduling strategy is to significantly reduce sensor-to-controller communication and, in turn, energy expenditure in the network. In this paper, we aim to design an admissible control policy, which is a function of the observed output, that minimizes a quadratic cost function while employing a stochastic event-triggered scheduler that preserves the Gaussian property of the plant state and the estimation error. For the infinite-horizon case, we present analytical expressions that quantify the trade-off between the communication cost and control performance of such event-triggered control systems. This trade-off is confirmed quantitatively via numerical examples.
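The communication-versus-performance trade-off described above can be made concrete with a toy scalar simulation. This sketch uses our own parameters and a simple stochastic trigger (transmit with probability growing in the squared prediction error); it is not the paper's controller or scheduler, only an illustration of the trade-off direction:

```python
import math
import random

def simulate(trigger_scale, steps=20000, a=0.9, seed=1):
    """Toy scalar event-triggered estimation example (illustrative
    assumptions, not the paper's model).  The sensor transmits the
    state with probability 1 - exp(-e^2 / (2 * s^2)), where e is the
    remote estimator's prediction error and s = trigger_scale.
    Larger s -> fewer transmissions but larger mean-squared error.
    Returns (communication rate, mean-squared estimation error)."""
    rng = random.Random(seed)
    x, xhat, sent, sq_err = 0.0, 0.0, 0, 0.0
    for _ in range(steps):
        x = a * x + rng.gauss(0.0, 1.0)   # plant with unit process noise
        xhat = a * xhat                   # open-loop prediction
        e = x - xhat
        if rng.random() < 1.0 - math.exp(-e * e / (2.0 * trigger_scale ** 2)):
            xhat = x                      # event fired: transmit the state
            sent += 1
        sq_err += (x - xhat) ** 2
    return sent / steps, sq_err / steps
```

Running it with a small trigger scale yields frequent transmissions and low estimation error; a large scale saves communication at the cost of a larger error, which is the trade-off the paper quantifies analytically.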
Localization in long-range Internet of Things networks is a challenging task, mainly due to the long distances and low bandwidth used. Moreover, cost, power, and size limitations restrict the integration of a GPS receiver in each device. In this work, we introduce a novel received signal strength indicator (RSSI) based localization solution for ultra narrow band (UNB) long-range IoT networks such as Sigfox. The essence of our approach is to leverage the existence of a few GPS-enabled sensors (GSNs) in the network to split the wide coverage into classes, enabling RSSI-based fingerprinting of other sensors (SNs). By using machine learning algorithms at the network back-end, the proposed approach does not impose extra power, payload, or hardware requirements. To comprehensively validate the performance of the proposed method, a measurement-based dataset collected in the city of Antwerp is used. We show that a location classification accuracy of 80% is achieved by virtually splitting a city with a radius of 2.5 km into seven classes. Moreover, separating classes by increasing the spacing between them brings the classification accuracy up to 92% based on our measurements. Furthermore, when the density of GSN nodes is high enough to enable device-to-device communication, using multilateration, we improve the probability of localizing SNs with an error lower than 20 m by 40% in our measurement scenario.
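RSSI fingerprint classification of the kind described above is often prototyped with a nearest-neighbour model: each training sample is a vector of RSSI readings from the receiving base stations, labelled with its region. The sketch below is a generic k-NN illustration (our own toy data and names), not the authors' actual classifier:

```python
from collections import Counter

def knn_classify(fingerprints, labels, rssi, k=3):
    """Minimal RSSI-fingerprint region classifier (illustrative
    nearest-neighbour sketch).  Each training fingerprint is a
    vector of RSSI values from the base stations; the query is
    labelled with the majority class among its k closest
    fingerprints in squared-Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(range(len(fingerprints)),
                     key=lambda i: dist(fingerprints[i], rssi))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]
```

For instance, a query fingerprint close to two "north" training vectors and far from the "south" ones is classified as "north" with k=3.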
The use of amateur drones (ADrs) is expected to increase significantly in the coming years. However, regulations do not allow such drones to fly over all areas, in addition to typical altitude limitations. As a result, there is an urgent need for ADr surveillance solutions. These solutions should include means of accurate detection, classification, and localization of unwanted drones in a no-fly zone. In this paper, we give an overview of promising techniques for modulation classification and signal-strength-based localization of ADrs by using surveillance drones (SDrs). By introducing a generic altitude-dependent propagation model, we show how detection and localization performance depend on the altitude of SDrs. In particular, our simulation results show a 25 dB reduction in the minimum detectable power, or a 10 times coverage enhancement, for an SDr flying at the optimum altitude. Moreover, for a target no-fly zone, the location estimation error of an ADr can be remarkably reduced by optimizing the positions of the SDrs. Finally, we conclude the paper with a general discussion about future work and possible challenges of aerial surveillance systems.
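The "optimum altitude" effect mentioned above arises because flying higher improves the line-of-sight probability but also lengthens the link. A common way to capture this is a mean air-to-ground path-loss model: free-space loss plus excess loss averaged over an elevation-angle-dependent LoS probability. The sketch below uses this generic structure with illustrative parameter values of our own choosing, not the paper's specific model:

```python
import math

def mean_path_loss(altitude_m, ground_dist_m, f_hz=2.4e9,
                   a=9.61, b=0.16, eta_los=1.0, eta_nlos=20.0):
    """Sketch of an altitude-dependent air-to-ground path-loss model
    (all parameter values are illustrative assumptions): free-space
    path loss plus excess loss averaged over a line-of-sight
    probability that grows with the elevation angle.  Returns dB."""
    d = math.hypot(altitude_m, ground_dist_m)             # 3-D distance
    theta = math.degrees(math.atan2(altitude_m, ground_dist_m))
    p_los = 1.0 / (1.0 + a * math.exp(-b * (theta - a)))  # LoS probability
    fspl = 20 * math.log10(d) + 20 * math.log10(f_hz) - 147.55
    return fspl + p_los * eta_los + (1.0 - p_los) * eta_nlos
```

With these toy parameters, for a target 1 km away on the ground, the mean path loss at a 1000 m altitude is lower than at either 50 m (mostly non-LoS) or 5000 m (long link), reproducing the intermediate-optimum behaviour the abstract reports.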