Datacenters have become a significant source of traffic, much of which is carried over private networks. The operators of those networks commonly have access to detailed traffic profiles and performance goals, which they seek to meet as efficiently as possible. Of interest are solutions that offer latency guarantees while minimizing the required network bandwidth, and in particular the extent to which traffic (re)shaping can help. The paper focuses on the most basic network configuration, namely a single-node, single-link network, with extensions to more general, multi-node networks discussed in a companion paper. The main results are in the form of optimal solutions for different types of schedulers of varying complexity, and therefore cost. The results demonstrate how judicious traffic shaping can help lower-complexity schedulers reduce the bandwidth they require, often performing as well as more complex ones.
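As a point of reference for how shaping trades burstiness for bandwidth (a standard network-calculus bound, not the paper's own derivation): a flow constrained by a token bucket with sustained rate r and burst b, served by a constant-rate link of capacity C >= r, experiences delay at most b/C, so meeting a deadline d requires only C >= max(r, b/d). The short sketch below encodes this textbook bound; the function name and the example numbers are illustrative.

```python
def min_bandwidth(r, b, d):
    """Minimum constant service rate guaranteeing delay <= d for a flow
    shaped by a token bucket with sustained rate r and burst size b.
    Textbook network-calculus bound: delay <= b/C whenever C >= r."""
    if d <= 0:
        raise ValueError("deadline must be positive")
    return max(r, b / d)

# Tighter shaping (smaller burst) directly lowers the bandwidth needed
# whenever the deadline, not the sustained rate, is the binding constraint.
print(min_bandwidth(r=1e6, b=1e5, d=0.01))   # 1.0e7 bit/s
print(min_bandwidth(r=1e6, b=5e4, d=0.01))   # 5.0e6 bit/s
```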
Predictably sharing the network is critical to achieving high utilization in the datacenter. Past work has focused on providing bandwidth to endpoints, but often we want to allocate resources among multi-node services. In this paper, we present Parley, which provides service-centric minimum bandwidth guarantees that can be composed hierarchically. Parley also supports service-centric weighted sharing of bandwidth in excess of these guarantees. Further, we show how to configure these policies so that services can achieve low latencies even at high network load. We evaluate Parley on a multi-tiered, oversubscribed network connecting 90 machines, each with a 10 Gb/s network interface, and demonstrate that Parley is able to meet its goals.
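To make the sharing model concrete, the following sketch implements a generic "minimum guarantee plus weighted excess" allocation for a single link. This is not Parley's actual mechanism; the service names, guarantees, and weights are hypothetical, and hierarchical composition would amount to applying the same rule recursively down a service tree.

```python
def allocate(capacity, services):
    """Give each service its minimum guarantee, then split the remaining
    capacity in proportion to weights.  `services` maps name -> (min_bw, weight).
    Assumes the guarantees are admissible (their sum <= capacity)."""
    total_min = sum(m for m, _ in services.values())
    if total_min > capacity:
        raise ValueError("guarantees exceed link capacity")
    excess = capacity - total_min
    total_weight = sum(w for _, w in services.values())
    return {
        name: m + (excess * w / total_weight if total_weight else 0.0)
        for name, (m, w) in services.items()
    }

# Hypothetical services on a 10 Gb/s link: (guarantee in Gb/s, weight)
print(allocate(10.0, {"web": (2.0, 1), "analytics": (3.0, 2), "backup": (0.0, 1)}))
# {'web': 3.25, 'analytics': 5.5, 'backup': 1.25}
```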
In this paper, we propose a novel resource management scheme that jointly allocates the transmit power and computational resources in a centralized radio access network architecture. The network comprises a set of computing nodes to which the requested tasks of different users are offloaded. The optimization problem minimizes the energy consumption of task offloading while taking the end-to-end latency, i.e., the transmission, execution, and propagation latencies of each task, into account. We aim to allocate the transmit power and computational resources such that the maximum acceptable latency of each task is satisfied. Since the optimization problem is non-convex, we divide it into two sub-problems: one for transmit power allocation and another for task placement and computational resource allocation. Transmit power is allocated via the convex-concave procedure. In addition, a heuristic algorithm is proposed to jointly manage computational resources and task placement. We also propose a feasibility analysis that finds a feasible subset of tasks. Furthermore, a disjoint method that separately allocates the transmit power and the computational resources is proposed as a baseline for comparison. A lower bound on the optimal solution of the optimization problem is also derived, based on an exhaustive search over task placement decisions and the Karush-Kuhn-Tucker conditions. Simulation results show that the joint method outperforms the disjoint method in terms of acceptance ratio. Simulations also show that the optimality gap of the joint method is less than 5%.
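The latency constraint at the heart of the formulation can be illustrated with a simple admissibility check: a task is feasible only if the sum of its transmission latency (from the Shannon rate at the allocated power), execution latency (cycles over the allocated CPU rate), and propagation latency stays within its deadline. The sketch below uses made-up parameter names and values and does not implement the convex-concave procedure or the KKT-based bound.

```python
import math

def end_to_end_latency(task_bits, cpu_cycles, power, bandwidth, channel_gain,
                       noise, cpu_rate, prop_delay):
    """Transmission + execution + propagation latency for one offloaded task.
    The transmission rate follows the Shannon formula; all names are illustrative."""
    rate = bandwidth * math.log2(1 + power * channel_gain / noise)   # bit/s
    t_tx = task_bits / rate
    t_exec = cpu_cycles / cpu_rate
    return t_tx + t_exec + prop_delay

def is_feasible(max_latency, **task):
    """A task is admissible only if it meets its maximum acceptable latency."""
    return end_to_end_latency(**task) <= max_latency

# Example with made-up numbers (bits, cycles, W, Hz, cycles/s, s)
print(is_feasible(0.05, task_bits=1e6, cpu_cycles=1e8, power=0.2,
                  bandwidth=10e6, channel_gain=1e-6, noise=1e-9,
                  cpu_rate=5e9, prop_delay=0.002))
```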
This paper has been withdrawn
In this work, we consider the problem of jointly minimizing the average cost of sampling and transmitting status updates by users over a wireless channel subject to average Age of Information (AoI) constraints. Transmission errors may occur, and a scheduling policy has to decide whether the users sample a new packet or attempt retransmission of the previously sampled packet. The cost consists of both sampling and transmission costs; sampling a new packet after a failure imposes an additional cost on the system. We formulate a stochastic optimization problem with the average cost in the objective under average AoI constraints. To solve this problem, we propose three scheduling policies: (a) a dynamic policy that is centralized and requires full knowledge of the state of the system, and (b) two stationary randomized policies that require no knowledge of the state of the system. We use tools from Lyapunov optimization theory to derive the dynamic policy, and we prove that its solution is arbitrarily close to the optimal one. To derive the randomized policies, we model the system as a Discrete-Time Markov Chain (DTMC). We provide closed-form and approximate expressions for the average AoI and its distribution under each randomized policy. Simulation results show the importance of providing the option to transmit an old packet in order to minimize the total average cost.
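A stationary randomized policy of this kind can be illustrated by a small Monte-Carlo simulation: in each slot the sender samples a fresh packet with some probability (paying both sampling and transmission costs) or retransmits the stored one (paying only the transmission cost), and the receiver's AoI drops to the packet's age on success or grows by one on failure. The slot-level dynamics and parameter names below are simplifying assumptions, not the paper's DTMC analysis.

```python
import random

def simulate(p_fresh, succ_prob, c_sample, c_tx, slots=100_000, seed=0):
    """Monte-Carlo estimate of average AoI and average per-slot cost for a
    stationary randomized policy: sample a fresh packet with probability
    p_fresh (cost c_sample + c_tx), otherwise retransmit the stored packet
    (cost c_tx).  Each transmission succeeds with probability succ_prob."""
    rng = random.Random(seed)
    aoi, stored_age = 1, 0          # receiver AoI and age of the stored packet
    aoi_sum = cost = 0.0
    for _ in range(slots):
        if rng.random() < p_fresh:
            stored_age = 0          # sample a fresh packet now
            cost += c_sample + c_tx
        else:
            cost += c_tx            # reuse the previously sampled packet
        if rng.random() < succ_prob:
            aoi = stored_age + 1    # AoI jumps to the delivered packet's age
        else:
            aoi += 1                # failed slot: AoI keeps growing
        aoi_sum += aoi
        stored_age += 1
    return aoi_sum / slots, cost / slots

print(simulate(p_fresh=0.3, succ_prob=0.7, c_sample=2.0, c_tx=1.0))
```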
In September 2020, the Broadband Forum published a new industry standard for measuring network quality. The standard centers on the notion of quality attenuation. Quality attenuation is a measure of the distribution of latency and packet loss between two points connected by a network path. A vital feature of the quality attenuation idea is that we can express detailed application requirements and network performance measurements in the same mathematical framework. Performance requirements and measurements are both modeled as latency distributions. To the best of our knowledge, existing models of the 802.11 WiFi protocol do not permit the calculation of complete latency distributions without assuming steady-state operation. We present a novel model of the WiFi protocol. Instead of computing throughput numbers from a steady-state analysis of a Markov chain, we explicitly model latency and packet loss. Explicitly modeling latency and loss allows for both transient and steady-state analysis of latency distributions, and we can derive throughput numbers from the latency results. Our model is, therefore, more general than the standard Markov chain methods. We reproduce several known results with this method. Using transient analysis, we derive bounds on WiFi throughput under the requirement that latency and packet loss must be bounded.
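To illustrate the idea of expressing requirements and measurements in the same framework, the sketch below treats both as latency distributions: a requirement is a set of (quantile, bound) pairs, a measurement is a list of observed delays with loss treated as infinite latency, and a two-hop path is composed by summing per-hop delays. This is an illustrative reading of quality attenuation, not the Broadband Forum's specified procedure, and all names and numbers are made up.

```python
import random

def meets_requirement(samples, requirement):
    """Quality-attenuation-style check: `samples` are measured one-way delays
    in seconds (None for a lost packet); `requirement` maps a quantile to the
    maximum latency allowed at that quantile, e.g. {0.50: 0.010, 0.99: 0.030}.
    A lost packet counts as infinite latency."""
    delays = sorted(float("inf") if s is None else s for s in samples)
    n = len(delays)
    return all(delays[min(n - 1, int(q * n))] <= bound
               for q, bound in requirement.items())

def compose(hop_a, hop_b, rng=random.Random(0)):
    """Approximate the delay distribution of a two-hop path by summing
    independently drawn per-hop delays (assumes no loss on either hop)."""
    return [rng.choice(hop_a) + rng.choice(hop_b) for _ in range(len(hop_a))]

# Hypothetical example: does the composed path still meet the application's need?
hop1 = [0.004, 0.005, 0.006, 0.012]
hop2 = [0.003, 0.003, 0.004, 0.008]
print(meets_requirement(compose(hop1, hop2), {0.50: 0.010, 0.99: 0.030}))
```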