In September 2020, the Broadband Forum published a new industry standard for measuring network quality. The standard centers on the notion of quality attenuation, a measure of the distribution of latency and packet loss between two points connected by a network path. A vital feature of quality attenuation is that detailed application requirements and network performance measurements can be expressed in the same mathematical framework: both are modeled as latency distributions. To the best of our knowledge, existing models of the 802.11 WiFi protocol do not permit the calculation of complete latency distributions without assuming steady-state operation. We present a novel model of the WiFi protocol. Instead of computing throughput from a steady-state analysis of a Markov chain, we explicitly model latency and packet loss. This allows for both transient and steady-state analysis of latency distributions, and throughput figures can be derived from the latency results, making our model more general than the standard Markov-chain methods. We reproduce several known results with this method, and using transient analysis we derive bounds on WiFi throughput under the requirement that latency and packet loss remain bounded.
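The central idea above, expressing requirements and measurements in one framework, can be illustrated with a small sketch. This is a hedged illustration only: the function names and the requirement format are assumptions for exposition, not part of the standard.

```python
import random

def empirical_cdf(samples, t):
    """Fraction of observed packets delivered within t seconds.
    Lost packets (None) never arrive, so they count against every t."""
    return sum(1 for s in samples if s is not None and s <= t) / len(samples)

def meets_requirement(samples, requirement):
    """requirement: list of (deadline, min_fraction) pairs, e.g. the
    application needs 98% of packets delivered within 50 ms.  Because a
    requirement is itself a (partial) latency distribution, checking it
    reduces to a pointwise CDF comparison."""
    return all(empirical_cdf(samples, t) >= p for t, p in requirement)

random.seed(1)
# Simulated measurement: latencies in seconds, None marks a lost packet.
samples = [random.uniform(0.005, 0.040) for _ in range(990)] + [None] * 10
requirement = [(0.050, 0.98)]  # 98% of packets within 50 ms
print(meets_requirement(samples, requirement))  # True: 99% arrive in time
```

The same comparison works for any deadline/fraction pairs, which is what lets one framework express both sides.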
The ultra-low latency supported by the fifth generation (5G) gives impetus to the prosperity of many wireless network applications, such as autonomous driving, robotics, telepresence, and virtual reality. Ultra-low latency was not achieved in a single step; it required the long-term evolution of network architecture and of key enabling communication technologies. In this paper, we provide an evolutionary overview of low latency in mobile communication systems from two perspectives: 1) network architecture; 2) physical-layer air-interface technologies. We first describe in detail the evolution of the communication network architecture from the second generation (2G) to 5G, highlighting the key changes that reduce latency. We then review the evolution of key enabling technologies in the physical layer from 2G to 5G, likewise aimed at reducing latency. Finally, we discuss the challenges and future research directions for low latency in both the network architecture and the physical layer.
Datacenters have become a significant source of traffic, much of which is carried over private networks. The operators of those networks commonly have access to detailed traffic profiles and performance goals, which they seek to meet as efficiently as possible. Of interest are solutions that offer latency guarantees while minimizing the required network bandwidth, and in particular the extent to which traffic (re)shaping can be of benefit. The paper focuses on the most basic network configuration, namely a single-node, single-link network, with extensions to more general, multi-node networks discussed in a companion paper. The main results take the form of optimal solutions for different types of schedulers of varying complexity, and therefore cost. They demonstrate how judicious traffic shaping can help lower-complexity schedulers reduce the bandwidth they require, often performing as well as more complex ones.
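As a hedged illustration of why (re)shaping helps, the sketch below implements a generic token-bucket shaper, not any of the paper's schedulers: spreading a burst over time bounds the traffic entering the link, which is what lets a simple scheduler meet a latency target with less provisioned bandwidth.

```python
def shape(arrivals, rate, bucket):
    """arrivals: time-sorted list of (time, size) packets.  Returns the time
    each packet is released, assuming tokens accrue continuously at `rate`
    up to a depth of `bucket`, and packets are released in FIFO order."""
    tokens, last = bucket, 0.0
    releases = []
    for t, size in arrivals:
        now = max(t, last)                        # FIFO: never before predecessor
        tokens = min(bucket, tokens + (now - last) * rate)
        if tokens < size:                         # wait for tokens to accrue
            now += (size - tokens) / rate
            tokens = size
        tokens -= size
        releases.append(now)
        last = now
    return releases

# A burst of three unit-size packets at t=0, shaped to rate 1 with depth 1,
# leaves the shaper spaced one time unit apart:
print(shape([(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)], rate=1.0, bucket=1.0))
# [0.0, 1.0, 2.0]
```

The downstream link then only needs to handle rate 1 plus one packet of burst, rather than the full instantaneous burst.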
In this paper, we study the light-traffic stability achieved by a scheduling algorithm suitable for heterogeneous traffic networks. Since analyzing such a scheduling algorithm is intractable with conventional mathematical tools, our goal is to minimize the largest queue-overflow probability achieved by the algorithm. In the large-deviation setting, this problem is equivalent to maximizing the asymptotic decay rate of the largest queue-overflow probability. We first derive an upper bound on the decay rate of the queue-overflow probability as the overflow threshold approaches infinity. We then study several structural properties of the minimum-cost path to overflow of the longest queue, which essentially determines the decay rate of the largest queue-overflow probability. Given these properties, we prove that the longest queue follows a sample path with linear increments. For certain parameter values, the scheduling algorithm is asymptotically optimal in reducing the largest queue length. Through numerical results, we illustrate the large-deviation properties of the queue lengths typically seen in practice while varying one parameter of the algorithm.
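The decay-rate notion can be made concrete with a self-contained sketch. This is an illustration only, using a single Geo/Geo/1 queue with Bernoulli arrivals and services rather than the paper's multi-queue scheduling setting; the closed-form rate below is the standard geometric-tail result for that simple queue.

```python
import math
import random

# For a discrete-time queue with Bernoulli(p) arrivals and Bernoulli(q)
# services (p < q for stability), the stationary tail is geometric:
# P(Q >= B) ~ rho**B with rho = p*(1-q) / (q*(1-p)), so the
# large-deviation decay rate is -log(rho).

def decay_rate(p, q):
    return math.log(q * (1 - p) / (p * (1 - q)))

def overflow_frequency(p, q, B, steps, seed=0):
    """Fraction of time slots in which the queue length is at least B."""
    rng = random.Random(seed)
    Q = hits = 0
    for _ in range(steps):
        Q += (rng.random() < p) - (Q > 0 and rng.random() < q)
        hits += Q >= B
    return hits / steps

p, q, B = 0.3, 0.6, 6
print(round(decay_rate(p, q), 3))  # 1.253 (analytic rate, log 3.5)
freq = overflow_frequency(p, q, B, 400_000)
print(round(-math.log(freq) / B, 2))  # simulated estimate, near the rate
```

The simulated estimate approaches the analytic rate only slowly in B, which is why large-deviation analysis, rather than brute-force simulation, is the tool of choice for overflow probabilities.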
A broadcast mode may augment peer-to-peer overlay networks with an efficient, scalable data-replication function, and may also give rise to a virtual link layer in VPN-type solutions. We introduce a simple broadcasting mechanism that operates in the prefix space of distributed hash tables without signaling. This paper concentrates on the performance analysis of this prefix flooding scheme. Starting from simple models of recursive $k$-ary trees, we analytically derive distributions of hop counts and of the replication load. Extensive simulation results follow, based on an implementation within the OverSim framework. Comparisons are drawn to Scribe, taken as a general reference model for group communication according to the shared, rendezvous-point-centered distribution paradigm. Prefix flooding confirms its widely predictable performance and consistently outperforms Scribe in all metrics. Reverse-path selection in overlays is identified as a major cause of performance degradation.
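A hedged sketch of the recursive $k$-ary tree model: assuming a complete dissemination tree (the function names are illustrative), flooding reaches $k^h$ new peers at hop $h$, so the hop-count distribution over all reached peers is proportional to $k^h$ and its mass concentrates near the leaves.

```python
def hop_count_distribution(k, height):
    """P(hop = h) for h = 1..height in a complete k-ary dissemination
    tree: k**h peers are first reached at hop h."""
    counts = [k ** h for h in range(1, height + 1)]
    total = sum(counts)
    return {h: c / total for h, c in zip(range(1, height + 1), counts)}

def mean_hops(k, height):
    return sum(h * p for h, p in hop_count_distribution(k, height).items())

# Binary prefix tree (k=2) of height 10: the mean hop count sits close to
# the height, because about half of all peers are leaves.
print(round(mean_hops(2, 10), 3))  # 9.01
```

This concentration near the maximum depth is what makes the hop-count (and hence latency) behavior of prefix flooding so predictable.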