
TARDIS: Stably shifting traffic in space and time (extended version)

Published by: Richard Clegg
Publication date: 2014
Research field: Informatics Engineering
Research language: English





This paper describes TARDIS (Traffic Assignment and Retiming Dynamics with Inherent Stability) which is an algorithmic procedure designed to reallocate traffic within Internet Service Provider (ISP) networks. Recent work has investigated the idea of shifting traffic in time (from peak to off-peak) or in space (by using different links). This work gives a unified scheme for both time and space shifting to reduce costs. Particular attention is given to the commonly used 95th percentile pricing scheme. The work has three main innovations: firstly, introducing the Shapley Gradient, a way of comparing traffic pricing between different links at different times of day; secondly, a unified way of reallocating traffic in time and/or in space; thirdly, a continuous approximation to this system is proved to be stable. A trace-driven investigation using data from two service providers shows that the algorithm can create large savings in transit costs even when only small proportions of the traffic can be shifted.
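The 95th percentile scheme mentioned above bills a link on the 95th percentile of its (typically 5-minute) traffic samples over the billing period, so the busiest 5% of intervals are effectively free. The Python sketch below only illustrates that billing rule, not the TARDIS procedure itself; the sample counts and the per-Mbps price are hypothetical.

# Illustrative sketch of 95th percentile billing (not the TARDIS algorithm).
# Sample counts and the per-Mbps price below are hypothetical.
import math

def percentile_95(samples_mbps):
    # Sort the 5-minute samples and discard the top 5%; the highest
    # remaining sample is the billed rate.
    ordered = sorted(samples_mbps)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[max(idx, 0)]

def transit_cost(samples_mbps, price_per_mbps):
    # Monthly cost is the billed rate times the negotiated per-Mbps price.
    return percentile_95(samples_mbps) * price_per_mbps

# A 30-day month yields 8640 five-minute samples per link.
samples = [100.0] * 8208 + [900.0] * 432   # steady load plus a 5% share of peak samples
print(transit_cost(samples, price_per_mbps=2.0))   # 200.0: the peaks are never billed

Because the top 5% of samples never reach the bill, moving even a small fraction of traffic out of the most expensive intervals, or onto a link whose percentile is set at a cheaper time of day, can cut transit cost disproportionately, which is why a way of comparing prices across links and times of day (the role the abstract assigns to the Shapley Gradient) matters.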




Read also

As new networking paradigms emerge for different networking applications, e.g., cyber-physical systems, and different services are handled under a converged data link technology, e.g., Ethernet, certain applications with mission-critical traffic cannot coexist on the same physical networking infrastructure using traditional Ethernet packet-switched networking protocols. The IEEE 802.1Q Time Sensitive Networking (TSN) task group is developing protocol standards to provide deterministic properties on Ethernet-based packet-switched networks. In particular, the IEEE 802.1Qcc, centralized management and control, and the IEEE 802.1Qbv, Time-Aware Shaper (TAS), can be used to manage and control scheduled traffic streams with periodic properties along with best-effort traffic on the same network infrastructure. In this paper, we investigate the effects of using the IEEE 802.1Qcc management protocol to accurately and precisely configure TAS-enabled switches (with transmission windows governed by gate control lists (GCLs) with gate control entries (GCEs)), ensuring ultra-low latency, zero packet loss, and minimal jitter for scheduled TSN traffic. We examine both a centralized network/distributed user model (hybrid model) and a fully-distributed (decentralized) 802.1Qcc model on a typical industrial control network with the goal of maximizing scheduled traffic streams.
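As a rough illustration of the Time-Aware Shaper mechanism described above, the Python sketch below models a gate control list as a repeating sequence of (duration, open-gates) entries and looks up which queues may transmit at a given time. The representation, durations, and queue numbers are hypothetical simplifications, not the IEEE 802.1Qbv data model or values from the paper.

# Minimal sketch of a Time-Aware Shaper gate control list (GCL); entries,
# durations, and queue numbers are hypothetical.
GCL = [
    (200, {7}),        # gate control entry: 200 us window for the scheduled TSN queue
    (800, {0, 1, 2}),  # 800 us window for best-effort queues
]
CYCLE_US = sum(duration for duration, _ in GCL)   # the GCL repeats every cycle

def open_gates(t_us):
    # Return the set of queues whose gates are open at time t_us.
    offset = t_us % CYCLE_US
    for duration, gates in GCL:
        if offset < duration:
            return gates
        offset -= duration
    return set()

print(open_gates(50))     # {7}: only the scheduled queue may transmit
print(open_gates(1250))   # {0, 1, 2}: best-effort window of the next cycle

Because a scheduled frame is only ever released inside its own window, it never contends with best-effort traffic, which is what yields the ultra-low latency and minimal jitter the abstract targets; 802.1Qcc then provides the management plane that configures such GCLs on the switches.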
Dynamic circuits are well suited for applications that require predictable service with a constant bit rate for a prescribed period of time, such as cloud computing and e-science applications. Past research on upstream transmission in passive optical networks (PONs) has mainly considered packet-switched traffic and has focused on optimizing packet-level performance metrics, such as reducing mean delay. This study proposes and evaluates a dynamic circuit and packet PON (DyCaPPON) that provides dynamic circuits along with packet-switched service. DyCaPPON provides $(i)$ flexible packet-switched service through dynamic bandwidth allocation in periodic polling cycles, and $(ii)$ consistent circuit service by allocating each active circuit a fixed-duration upstream transmission window during each fixed-duration polling cycle. We analyze circuit-level performance metrics, including the blocking probability of dynamic circuit requests in DyCaPPON through a stochastic knapsack-based analysis. Through this analysis we also determine the bandwidth occupied by admitted circuits. The remaining bandwidth is available for packet traffic and we conduct an approximate analysis of the resulting mean delay of packet traffic. Through extensive numerical evaluations and verifying simulations we demonstrate the circuit blocking and packet delay trade-offs in DyCaPPON.
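The stochastic knapsack-based analysis mentioned above can be illustrated with the standard Kaufman-Roberts recursion for the blocking probability of circuits of different bandwidths sharing a fixed upstream capacity. The Python sketch below is the textbook recursion, not DyCaPPON's actual model, and the capacity and offered loads are hypothetical.

# Textbook Kaufman-Roberts recursion for a stochastic knapsack; capacity and
# offered loads are hypothetical, not values from the paper.
def kaufman_roberts(capacity, classes):
    # classes: list of (bandwidth_units, offered_load_erlangs) per circuit class.
    # Returns the blocking probability for each class.
    q = [0.0] * (capacity + 1)   # unnormalized occupancy distribution
    q[0] = 1.0
    for n in range(1, capacity + 1):
        total = 0.0
        for b, rho in classes:
            if n >= b:
                total += rho * b * q[n - b]
        q[n] = total / n
    norm = sum(q)
    # Class k is blocked whenever fewer than b_k bandwidth units are free.
    return [sum(q[capacity - b + 1:]) / norm for b, _ in classes]

# Example: 10 upstream bandwidth units shared by two circuit classes.
print(kaufman_roberts(10, [(1, 3.0), (4, 1.0)]))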
Cloud computing has emerged as a powerful and elastic platform for internet service hosting, yet it also draws concerns about the unpredictable performance of cloud-based services due to network congestion. To offer predictable performance, the virtual cluster abstraction of cloud services has been proposed, which enables allocation and performance isolation regarding both computing resources and network bandwidth in a simplified virtual network model. One issue arising in virtual cluster allocation is the survivability of tenant services against physical failures. Existing works have studied virtual cluster backup provisioning with fixed primary embeddings, but have not considered the impact of primary embeddings on backup resource consumption. To address this issue, in this paper we study how to embed virtual clusters survivably in the cloud data center, by jointly optimizing primary and backup embeddings of the virtual clusters. We formally define the survivable virtual cluster embedding problem. We then propose a novel algorithm, which computes the most resource-efficient embedding given a tenant request. Since the optimal algorithm has high time complexity, we further propose a faster heuristic algorithm, which is several orders of magnitude faster than the optimal solution, yet able to achieve similar performance. Besides theoretical analysis, we evaluate our algorithms via extensive simulations.
Various legacy and emerging industrial control applications create the requirement of periodic and time-sensitive communication (TSC) for 5G/6G networks. State-of-the-art semi-persistent scheduling (SPS) techniques fall short of meeting the requirements of this type of critical traffic due to periodicity misalignment between assignments and arriving packets, which leads to significant waiting delays. To tackle this challenge, we develop a novel recursive periodicity shifting (RPS)-SPS scheme that provides an optimal scheduling policy by recursively aligning the period of assignments until the timing mismatch is minimized. RPS can be realized in 5G wireless networks with minimal modifications to the scheduling framework. Performance evaluation shows the effectiveness of the proposed scheme in terms of minimizing misalignment delay with arbitrary traffic periodicity.
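The waiting delay caused by periodicity misalignment, which the scheme above is designed to remove, can be seen with a few lines of Python: packets arriving with one period wait for semi-persistent assignments recurring with a slightly different period. The periods below are hypothetical, and this is an illustration of the problem, not the RPS-SPS algorithm itself.

# Illustration of SPS periodicity misalignment delay; periods are hypothetical.
import math

def mean_wait(traffic_period_ms, sps_period_ms, horizon_ms=10_000):
    # Average time a periodic packet waits for the next periodic assignment.
    waits = []
    t = 0.0
    while t < horizon_ms:
        next_assignment = math.ceil(t / sps_period_ms) * sps_period_ms
        waits.append(next_assignment - t)
        t += traffic_period_ms
    return sum(waits) / len(waits)

print(mean_wait(traffic_period_ms=2.0, sps_period_ms=2.0))  # aligned: ~0 ms waiting
print(mean_wait(traffic_period_ms=2.1, sps_period_ms=2.0))  # slight mismatch: clearly > 0

Recursively shifting the assignment period toward the traffic period, as the abstract describes, drives this waiting term toward zero.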
We study how to design edge server placement and server scheduling policies under workload uncertainty for 5G networks. We introduce a new metric called resource pooling factor to handle unexpected workload bursts. Maximizing this metric offers a strong enhancement on top of robust optimization against workload uncertainty. Using both real traces and synthetic traces, we show that the proposed server placement and server scheduling policies not only demonstrate better robustness against workload uncertainty than existing approaches, but also significantly reduce the cost of service providers. Specifically, in order to achieve close-to-zero workload rejection rate, the proposed server placement policy reduces the number of required edge servers by about 25% compared with the state-of-the-art approach; the proposed server scheduling policy reduces the energy consumption of edge servers by about 13% without causing much impact on the service quality.