
Flexible Network Bandwidth and Latency Provisioning in the Datacenter

Posted by Vimalkumar Jeyakumar
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





Predictably sharing the network is critical to achieving high utilization in the datacenter. Past work has focused on providing bandwidth to endpoints, but often we want to allocate resources among multi-node services. In this paper, we present Parley, which provides service-centric minimum bandwidth guarantees that can be composed hierarchically. Parley also supports service-centric weighted sharing of bandwidth in excess of these guarantees. Further, we show how to configure these policies so services can get low latencies even at high network load. We evaluate Parley on a multi-tiered oversubscribed network connecting 90 machines, each with a 10 Gb/s network interface, and demonstrate that Parley is able to meet its goals.
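To make the policy concrete, here is a minimal sketch (not Parley's actual implementation) of the allocation model the abstract describes: every service first receives its minimum guarantee, and whatever capacity remains is divided among still-unsatisfied services in proportion to their weights. The service names, guarantees, weights, and demands are hypothetical.

```python
# A minimal sketch of "minimum guarantee + weighted sharing of the excess".
# Not Parley's code; a flat, single-link illustration with made-up services.

def allocate(capacity_gbps, services):
    """services: dict name -> (min_guarantee, weight, demand), all in Gb/s."""
    # Step 1: reserve each service's guaranteed minimum (capped by its demand).
    alloc = {n: min(g, d) for n, (g, w, d) in services.items()}
    spare = capacity_gbps - sum(alloc.values())

    # Step 2: water-fill the spare capacity by weight among services that
    # still have unmet demand.
    active = {n for n, (g, w, d) in services.items() if d > alloc[n]}
    while spare > 1e-9 and active:
        total_w = sum(services[n][1] for n in active)
        share = {n: spare * services[n][1] / total_w for n in active}
        spare = 0.0
        for n in list(active):
            need = services[n][2] - alloc[n]
            give = min(share[n], need)
            alloc[n] += give
            spare += share[n] - give        # return what the service cannot use
            if alloc[n] >= services[n][2] - 1e-9:
                active.remove(n)
    return alloc


if __name__ == "__main__":
    # 10 Gb/s link shared by two hypothetical services.
    print(allocate(10.0, {
        "web":   (3.0, 2, 9.0),   # (min guarantee, weight, demand)
        "batch": (2.0, 1, 9.0),
    }))
    # -> web gets 3 + 2/3 of the 5 Gb/s excess, batch gets 2 + 1/3 of it.
```

Parley composes such guarantees hierarchically across a service tree; the flat version above only illustrates the per-link arithmetic.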


Read also

Jiayi Song, Roch Guerin, 2021
Datacenters have become a significant source of traffic, much of which is carried over private networks. The operators of those networks commonly have access to detailed traffic profiles and performance goals, which they seek to meet as efficiently as possible. Of interest are solutions for offering latency guarantees while minimizing the required network bandwidth. Of particular interest is the extent to which traffic (re)shaping can be of benefit. The paper focuses on the most basic network configuration, namely, a single node, single link network, with extensions to more general, multi-node networks discussed in a companion paper. The main results are in the form of optimal solutions for different types of schedulers of varying complexity, and therefore cost. The results demonstrate how judicious traffic shaping can help lower complexity schedulers reduce the bandwidth they require, often performing as well as more complex ones.
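As a rough illustration of the trade-off this abstract studies, the sketch below uses standard token-bucket and FIFO delay bounds, not the paper's exact model or optimal schedulers, to show how reshaping a bursty but latency-tolerant flow shrinks the link rate a simple FIFO scheduler needs for every flow to meet its deadline. All flow parameters are invented.

```python
# Hedged back-of-the-envelope sketch: two token-bucket flows (burst sigma in
# bits, rate rho in b/s, deadline D in s) share one FIFO link. The FIFO
# worst-case queueing delay for the aggregate is total_burst / rate, and a
# greedy shaper that trims a flow's burst adds at most (sigma - sigma') / rho
# of delay to that flow.

def fifo_rate_needed(flows, reshaped_burst=None):
    """Return the minimum link rate (b/s) so every flow meets its deadline."""
    reshaped_burst = reshaped_burst or {}
    bursts, shaping_delay = [], []
    for i, f in enumerate(flows):
        s = reshaped_burst.get(i, f["sigma"])
        bursts.append(s)
        shaping_delay.append((f["sigma"] - s) / f["rho"])   # delay added by the shaper
    total_burst = sum(bursts)
    rate = sum(f["rho"] for f in flows)                     # must at least carry the load
    for i, f in enumerate(flows):
        budget = f["D"] - shaping_delay[i]                  # deadline left after shaping
        if budget <= 0:
            raise ValueError("flow %d reshaped too aggressively" % i)
        rate = max(rate, total_burst / budget)
    return rate


if __name__ == "__main__":
    flows = [
        {"sigma": 1e6,  "rho": 100e6, "D": 2e-3},    # small, latency-sensitive
        {"sigma": 10e6, "rho": 100e6, "D": 100e-3},  # bursty, latency-tolerant
    ]
    print(fifo_rate_needed(flows))                             # ~5.5 Gb/s
    print(fifo_rate_needed(flows, reshaped_burst={1: 0.5e6}))  # ~0.75 Gb/s
```

Reshaping the latency-tolerant flow spends part of its generous deadline at the shaper, which in turn lets the same FIFO link honor the 2 ms deadline at a fraction of the rate.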
Multi-port memory controllers (MPMCs) have become increasingly important in many modern applications due to the tremendous growth in bandwidth requirements. Many approaches so far have focused on improving either the memory access latency or the bandwidth utilization for specific applications. Moreover, the application systems are likely to require certain adjustments to connect with an MPMC, since the MPMC interface is limited to a single-clock and single data-width domain. In this paper, we propose efficient techniques to improve the flexibility, latency, and bandwidth of an MPMC. Firstly, MPMC interfaces employ a pair of dual-clock dual-port FIFOs at each port, so any multi-clock multi-data-width application system can connect to an MPMC without requiring extra resources. Secondly, memory access latency is significantly reduced because parallel FIFOs temporarily buffer the data transferred between the application system and memory. Lastly, a proposed arbitration scheme, namely window-based first-come-first-serve, considerably enhances the bandwidth utilization. Depending on the application, the MPMC can be properly configured by updating several internal configuration registers. The experimental results on an Altera Cyclone FPGA show that the MPMC is fully operational at 150 MHz and supports up to 32 concurrent connections at various clocks and data widths. More significantly, the achieved bandwidth utilization is approximately 93.2% of the theoretical bandwidth, and the access latency is minimized compared to previous designs.
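One plausible reading of the window-based first-come-first-serve idea is sketched below: requests stay in arrival order, but within a small look-ahead window the arbiter may prefer a request that hits the currently open row. This is only an illustrative interpretation, not the authors' RTL design; the port count, request fields, and toy row model are assumptions.

```python
# Hedged sketch of a window-based FCFS arbiter (one possible interpretation).

from collections import deque

class WindowFCFSArbiter:
    def __init__(self, window=4):
        self.window = window
        self.queue = deque()          # (arrival_order, port, row) in FCFS order
        self.ticket = 0
        self.open_row = None          # currently open DRAM row (toy model)

    def request(self, port, row):
        self.queue.append((self.ticket, port, row))
        self.ticket += 1

    def grant(self):
        """Pick the next request: prefer a row hit inside the window,
        otherwise fall back to the strictly oldest request."""
        if not self.queue:
            return None
        window = list(self.queue)[: self.window]
        pick = next((r for r in window if r[2] == self.open_row), window[0])
        self.queue.remove(pick)
        self.open_row = pick[2]
        return pick[1]                # port that wins this cycle


if __name__ == "__main__":
    arb = WindowFCFSArbiter(window=4)
    for port, row in [(0, 7), (1, 3), (2, 7), (3, 7)]:
        arb.request(port, row)
    print([arb.grant() for _ in range(4)])   # [0, 2, 3, 1]: row hits batched
```

The window bounds how far a request can be deferred, which keeps the scheme close to FCFS fairness while recovering some of the row-hit bandwidth a strict FCFS order would lose.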
A novel intelligent bandwidth allocation scheme in NG-EPON using reinforcement learning is proposed and demonstrated for latency management. We verify the capability of the proposed scheme under both fixed and dynamic traffic load scenarios to achieve <1 ms average latency. The RL agent provides an efficient mechanism for managing latency, making the proposed intelligent bandwidth allocation (IBA) a promising solution for the next-generation access network.
Anqi Huang, Yingyu Li, Yong Xiao, 2020
Network slicing has been considered as one of the key enablers for 5G to support diversified services and application scenarios. This paper studies distributed network slicing utilizing both the spectrum resources offered by the communication network and the computational resources of a coexisting fog computing network. We propose a novel distributed framework based on a new control plane entity, the regional orchestrator (RO), which can be deployed between base stations (BSs) and fog nodes to coordinate and control their bandwidth and computational resources. We propose a distributed resource allocation algorithm based on the Alternating Direction Method of Multipliers with Partial Variable Splitting (DistADMM-PVS). We prove that the proposed algorithm can minimize the average latency of the entire network and at the same time guarantee satisfactory latency performance for every supported type of service. Simulation results show that the proposed algorithm converges much faster than some other existing algorithms. Joint network slicing with both bandwidth and computational resources can offer around 15% overall latency reduction compared to network slicing with only a single resource.
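The coordination pattern, an orchestrator adjusting a shared resource while each slice solves a local problem, can be illustrated with the toy below. It deliberately substitutes plain dual decomposition with an M/M/1-style latency proxy for the paper's DistADMM-PVS algorithm; the weights, capacity, and step size are made up.

```python
# Hedged toy: slices minimize a local latency proxy w_i / x_i plus a bandwidth
# price; a coordinator (playing the RO's role) adjusts the price so the shared
# budget is respected. This is dual decomposition, NOT DistADMM-PVS.

import math

def coordinate(weights, capacity, steps=2000, lr=1e-3):
    lam = 1.0                                   # bandwidth price kept by the coordinator
    for _ in range(steps):
        # Each slice i independently minimizes w_i/x_i + lam*x_i  ->  x_i = sqrt(w_i/lam)
        x = [math.sqrt(w / lam) for w in weights]
        # Raise the price if the budget is exceeded, lower it otherwise.
        lam = max(1e-9, lam + lr * (sum(x) - capacity))
    return x

if __name__ == "__main__":
    alloc = coordinate(weights=[4.0, 1.0, 1.0], capacity=10.0)
    print([round(a, 2) for a in alloc], round(sum(alloc), 2))
    # latency-heavier slice gets more bandwidth; total stays ~= 10
```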
We propose that clusters interconnected with network topologies having minimal mean path length will increase their overall performance for a variety of applications. We approach our heuristic by constructing clusters of up to 36 nodes having Dragonfly, torus, ring, Chvatal, Wagner, Bidiakis and several other topologies with minimal mean path lengths, and by simulating the performance of 256-node clusters with the same network topologies. The optimal (or sub-optimal) low-latency network topologies are found by minimizing the mean path length of regular graphs. The selected topologies are benchmarked using ping-pong messaging, the MPI collective communications, and the standard parallel applications including effective bandwidth, FFTE, Graph 500 and NAS parallel benchmarks. We established strong correlations between the clusters' performance and the network topologies, especially the mean path lengths, for a wide range of applications. In communication-intensive benchmarks, clusters with optimal network topologies outperform those with mainstream topologies several-fold. It is striking that a mere adjustment of the network topology suffices to reclaim performance from the same computing hardware.
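The selection criterion, minimizing mean path length among regular graphs, is easy to reproduce at small scale. The sketch below computes mean shortest-path length with plain BFS for two arbitrary 16-node, degree-4 topologies (a 4x4 torus and a chordal ring); these are illustrative graphs, not the exact ones benchmarked in the paper.

```python
# Hedged sketch: compare the mean shortest-path length (in hops) of two
# degree-4 topologies on 16 nodes. Lower mean path length is the criterion
# the abstract uses to pick low-latency topologies.

from collections import deque

def mean_path_length(adj):
    """adj: dict node -> set of neighbours. Average hop count over all ordered pairs."""
    n, total = len(adj), 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                      # plain BFS from src
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def circulant(n, offsets):
    """n-node ring where each node also links to nodes +/- each chord offset."""
    return {i: {(i + o) % n for o in offsets} | {(i - o) % n for o in offsets}
            for i in range(n)}

def torus_4x4():
    """4x4 2-D torus (wraparound mesh); every node has degree 4."""
    return {(r, c): {((r + 1) % 4, c), ((r - 1) % 4, c),
                     (r, (c + 1) % 4), (r, (c - 1) % 4)}
            for r in range(4) for c in range(4)}

if __name__ == "__main__":
    print(round(mean_path_length(torus_4x4()), 3))            # 2.133
    print(round(mean_path_length(circulant(16, (1, 6))), 3))  # 1.933, shorter at the same degree
```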