Today's network devices share buffer space across priority queues to avoid drops during transient congestion. While cost-effective most of the time, this sharing can cause undesired interference among seemingly independent traffic. As a result, low-priority traffic can inflict increased packet loss on high-priority traffic. Similarly, long flows can prevent the buffer from absorbing incoming bursts even if they do not share the same queue. The cause of this perhaps unintuitive outcome is that today's buffer sharing techniques are unable to guarantee isolation across (priority) queues without statically allocating buffer space. To address this issue, we designed FB, a novel buffer sharing scheme that offers strict isolation guarantees to high-priority traffic without sacrificing link utilization. Thus, FB outperforms conventional buffer sharing algorithms in absorbing bursts while achieving on-par throughput. We show that FB is practical and runs at line rate on existing hardware (Barefoot Tofino). Significantly, FB's operations can be approximated in non-programmable devices.
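The abstract does not spell out FB's algorithm, so the following is only a minimal sketch of the design space it targets: classic Dynamic Thresholds admission (which offers no isolation guarantee) extended with a reserved headroom that low-priority traffic may never consume, so high-priority bursts always find buffer space. All constants and names are illustrative.

```python
# Hypothetical sketch of shared-buffer admission with isolation for
# high-priority traffic. FB's actual algorithm is not given in the
# abstract; this only contrasts classic Dynamic Thresholds with a
# reserved headroom that low-priority packets may never consume.

BUFFER_SIZE = 12_000_000      # total shared buffer in bytes (assumed)
ALPHA = 1.0                   # Dynamic Thresholds scaling factor
HI_PRIO_RESERVE = 2_000_000   # bytes low-priority packets may never use

class SharedBuffer:
    def __init__(self):
        self.occupancy = {}   # queue id -> bytes currently buffered

    def used(self):
        return sum(self.occupancy.values())

    def admit(self, queue, pkt_len, high_priority):
        free = BUFFER_SIZE - self.used()
        if not high_priority:
            # Low priority cannot dip into the reserved headroom, so
            # high-priority bursts always find room to be absorbed.
            free -= HI_PRIO_RESERVE
        # Dynamic Thresholds: a queue may grow to alpha * (free buffer).
        threshold = ALPHA * max(free, 0)
        q_len = self.occupancy.get(queue, 0)
        if q_len + pkt_len <= threshold:
            self.occupancy[queue] = q_len + pkt_len
            return True       # packet buffered
        return False          # packet dropped
```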
Network device syslogs are ubiquitous and abundant in modern data centers, with most large data centers producing millions of messages per day. Yet, the operational information reflected in syslogs and its implications for diagnosis and management tasks are poorly understood. Prevalent approaches to understanding syslogs focus on simple correlation and abnormality detection and are often limited to detection, providing little insight towards diagnosis and resolution. Towards improving data center operations, we propose and implement Log-Prophet, a system that applies a toolbox of statistical techniques and domain-specific models to mine detailed diagnoses. Log-Prophet infers causal relationships between syslog lines and constructs succinct but valuable problem graphs, summarizing root causes and their locality, including cascading problems. We validate Log-Prophet using problem tickets and through operator interviews. To demonstrate the strength of Log-Prophet, we perform an initial longitudinal study of a large online service provider's data center. Our study demonstrates that Log-Prophet significantly reduces the number of alerts while highlighting interesting operational issues.
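As a hedged illustration of the kind of statistical building block such a toolbox might contain (the abstract does not detail Log-Prophet's actual techniques), the sketch below scores candidate cause-effect edges between syslog event types by how often one event type precedes another within a short time window; edges above a threshold would form a problem graph. The window length and event names are assumptions.

```python
# Illustrative only: scoring a candidate cause->effect edge by how often
# event type A precedes event type B within a short window across the log.

from collections import defaultdict

WINDOW = 5.0  # seconds within which B is considered to "follow" A (assumed)

def score_edges(events):
    """events: list of (timestamp, event_type), sorted by timestamp."""
    follows = defaultdict(int)   # (a, b) -> times b followed a in window
    totals = defaultdict(int)    # a -> total occurrences of a
    for i, (t_a, a) in enumerate(events):
        totals[a] += 1
        for t_b, b in events[i + 1:]:
            if t_b - t_a > WINDOW:
                break
            if b != a:
                follows[(a, b)] += 1
    # Edge strength: fraction of a's occurrences that b follows.
    return {(a, b): n / totals[a] for (a, b), n in follows.items()}

log = [(0.0, "link_down"), (0.4, "bgp_flap"), (0.9, "ping_loss"),
       (60.0, "link_down"), (60.3, "bgp_flap")]
print(score_edges(log))  # ('link_down', 'bgp_flap') scores 1.0
```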
In recent years, many techniques have been developed to improve the performance and efficiency of data center networks. While these techniques perform well, they are often designed using heuristics that leverage domain-specific properties of the workload or hardware. In this vision paper, we argue that many data center networking techniques with diverse goals, e.g., routing, topology augmentation, and energy savings, actually share design and architectural similarities. We present a design for developing general intermediate representations of network topologies using deep learning that are amenable to solving classes of data center problems. We develop a framework, DeepConfig, that simplifies the process of configuring and training deep learning agents that use the intermediate representation to learn different tasks. To illustrate the strength of our approach, we configured, implemented, and evaluated a DeepConfig-Agent that tackles the data center topology augmentation problem. Our initial results are promising: DeepConfig performs comparably to the optimal solution.
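DeepConfig's intermediate representation is not specified in the abstract; as a hedged sketch, one plausible input encoding is the topology as a utilization-weighted adjacency matrix, from which a learned encoder would produce the shared representation. The trivial argmax "agent" below merely stands in for a trained policy on the topology augmentation task.

```python
# Hedged sketch: an assumed input encoding for a topology-learning agent,
# not DeepConfig's actual representation.

import numpy as np

def encode_topology(n_switches, links):
    """links: iterable of (src, dst, utilization in [0, 1])."""
    adj = np.zeros((n_switches, n_switches), dtype=np.float32)
    for src, dst, util in links:
        adj[src, dst] = adj[dst, src] = util
    return adj

def pick_augmentation(adj):
    # Trivial stand-in for a trained policy: augment the most utilized link.
    src, dst = np.unravel_index(np.argmax(adj), adj.shape)
    return int(src), int(dst)

topo = encode_topology(4, [(0, 1, 0.9), (1, 2, 0.2), (2, 3, 0.7)])
print(pick_augmentation(topo))  # (0, 1): the hottest link
```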
Data centres are growing in number and size, and their networks are expanding to carry larger amounts of traffic. The traffic profile is constantly varying, particularly in cloud data centres where tenants arrive, leave, and may change their resource requirements in between, and so the network configuration must change at a commensurate rate. Software-Defined Networking (programmatic control of network configuration) has been critical to meeting the demands of modern data centre network management, and has been the subject of intense focus by the research community, working in conjunction with industry. In this survey, we review Software-Defined Networking research targeting the management and operation of data centre networks.
Load balancing plays a vital role in modern data centers, distributing traffic among instances of network functions or services. State-of-the-art load balancers such as SilkRoad dispatch traffic obliviously, without considering the real-time utilization of service instances, and can therefore lead to uneven load distribution and suboptimal performance. In this paper, we design and implement Spotlight, a scalable and distributed load balancing architecture that maintains connection-to-instance mapping consistency at the edge of data center networks. Spotlight uses a new stateful flow dispatcher that periodically polls instances' load and dispatches incoming connections to instances in proportion to their available capacity. Our design utilizes a distributed control plane and in-band flow dispatching, and thus scales horizontally in data center networks. Through extensive flow-level simulation and packet-level experiments on a testbed, we demonstrate that, compared to existing methods, Spotlight distributes traffic more efficiently and achieves near-optimal performance in terms of overall service utilization. Moreover, Spotlight is not sensitive to the utilization polling interval and can therefore be implemented with a low polling frequency to reduce the amount of control traffic. Indeed, Spotlight achieves these performance improvements with an O(100 ms) polling interval.
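The core dispatching rule described in the abstract (assign new connections in proportion to available capacity, based on periodically polled load) can be sketched in a few lines; the instance names, capacities, and polling format below are illustrative, not Spotlight's actual interface.

```python
# Minimal sketch of capacity-proportional dispatch: poll each instance's
# load periodically, then assign each new connection with probability
# proportional to its available capacity.

import random

def dispatch(polled):
    """polled: dict mapping instance name -> (capacity, current_load)."""
    names = list(polled)
    avail = [max(cap - load, 0.0) for cap, load in polled.values()]
    if sum(avail) == 0:               # all instances saturated:
        return random.choice(names)   # fall back to a uniform choice
    return random.choices(names, weights=avail)[0]

snapshot = {"nf-1": (10.0, 9.5), "nf-2": (10.0, 3.0), "nf-3": (10.0, 6.0)}
instance = dispatch(snapshot)  # picks nf-2 with probability 7/11.5
```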
In this paper, we propose Virtuoso, a purely software-based multi-path RDMA solution for data center networks (DCNs) that effectively utilizes the rich multi-path topology for load balancing and reliability. As a middleware library operating in user space, Virtuoso employs three innovative mechanisms to achieve this goal. In contrast to the existing hardware-based MP-RDMA solution, Virtuoso can be readily deployed in DCNs with existing RDMA NICs. It also decouples path selection and load balancing mechanisms from hardware features, allowing DCN operators and applications to make flexible decisions by employing the best mechanisms (as plug-in software library modules) as needed. Our experiments show that Virtuoso is capable of fully utilizing multiple paths with negligible CPU overhead.
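The abstract's decoupling of path selection into swappable software modules suggests an interface along the following lines; this is an assumed illustration of the plug-in idea, not Virtuoso's actual API.

```python
# Assumed illustration: path selection as a swappable user-space module
# rather than a NIC feature. Both policies below are hypothetical.

from abc import ABC, abstractmethod
import itertools

class PathSelector(ABC):
    @abstractmethod
    def pick(self, paths, flow_id):
        """Return the path to use for the next message of `flow_id`."""

class RoundRobin(PathSelector):
    """Spray messages across all paths for maximum utilization."""
    def __init__(self):
        self._counter = itertools.count()
    def pick(self, paths, flow_id):
        return paths[next(self._counter) % len(paths)]

class FlowHash(PathSelector):
    """Pin each flow to one path, preserving in-order delivery."""
    def pick(self, paths, flow_id):
        return paths[hash(flow_id) % len(paths)]

selector: PathSelector = RoundRobin()   # operator-selected policy module
path = selector.pick(["path-A", "path-B"], flow_id=42)
```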