In this work, we investigate information freshness in a status update communication system consisting of a source-destination link. Initially, we study the properties of a sample path of the age of information (AoI) process at the destination. We obtain a general formula for the stationary distribution of the AoI under the assumption of ergodicity. We relate this result to a discrete-time queueing system and provide a general expression for the generating function of the AoI in terms of the system time and the peak age of information (PAoI) metric. Furthermore, we consider three different single-server system models and obtain closed-form expressions for the generating functions and the stationary distributions of the AoI and the PAoI. The first model is a first-come-first-served (FCFS) queue, the second is a preemptive last-come-first-served (LCFS) queue, and the last is a bufferless system with packet dropping. We build upon these results to provide a methodology for analyzing general non-linear age functions for this type of system, using representations of functions as power series.
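For reference, the standard sample-path construction underlying such results (generic notation, not necessarily that of this work) defines the AoI at the destination, its time average, and the peak age seen just before the $k$-th in-order delivery as
\[
\Delta(t) = t - u(t), \qquad
\bar{\Delta} = \lim_{T \to \infty} \frac{1}{T}\int_{0}^{T} \Delta(t)\,dt, \qquad
A_k = X_k + T_k,
\]
where $u(t)$ is the generation time of the freshest update received by time $t$, $X_k$ is the interarrival time between updates $k-1$ and $k$, and $T_k$ is the system time of update $k$.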
We study the multi-user scheduling problem for minimizing the Age of Information (AoI) in cellular wireless networks under stationary and non-stationary regimes. We derive fundamental lower bounds for the scheduling problem and design efficient online policies with provable performance guarantees. In the stationary setting, we consider the AoI optimization problem for a set of mobile users travelling around multiple cells. In this setting, we propose a scheduling policy and show that it is $2$-optimal. Next, we propose a new adversarial channel model for studying the scheduling problem in non-stationary environments. For $N$ users, we show that the competitive ratio of any online scheduling policy in this setting is at least $\Omega(N)$. We then propose an online policy and show that it achieves a competitive ratio of $O(N^2)$. Finally, we introduce a relaxed adversarial model with channel state estimations for the immediate future. We propose a heuristic model predictive control policy that exploits this feature and compare its performance through numerical simulations.
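As an illustration of the stationary setting, the following minimal Python sketch simulates a generic age-times-reliability index scheduler over unreliable channels; the index rule, user count, and success probabilities are assumptions made only for illustration and should not be read as the $2$-optimal policy analyzed here.

import random

def greedy_index_scheduler(ages, success_prob):
    # Generic illustrative rule: serve the user with the largest
    # age-times-reliability index (not the 2-optimal policy above).
    return max(range(len(ages)), key=lambda i: ages[i] * success_prob[i])

def simulate(num_users=4, horizon=100_000, seed=0):
    rng = random.Random(seed)
    p = [0.9, 0.7, 0.5, 0.3][:num_users]   # hypothetical channel reliabilities
    ages = [1] * num_users
    total_age = 0
    for _ in range(horizon):
        i = greedy_index_scheduler(ages, p)
        delivered = rng.random() < p[i]     # Bernoulli channel outcome
        total_age += sum(ages)
        ages = [1 if (j == i and delivered) else a + 1
                for j, a in enumerate(ages)]
    return total_age / (horizon * num_users)

if __name__ == "__main__":
    print("time-average AoI per user:", simulate())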
There is a growing interest in analysing the freshness of data in networked systems. Age of Information (AoI) has emerged as a popular metric to quantify this freshness at a given destination. There has been a significant research effort in optimizing this metric in communication and networking systems under different settings. In contrast to previous works, we are interested in a fundamental question: what is the minimum achievable AoI in any single-server-single-source queueing system for a given service-time distribution? To address this question, we study a problem of optimizing AoI under service preemptions. Our main result is the characterization of the minimum achievable average peak AoI (PAoI). We obtain this result by showing that a fixed-threshold policy is optimal within the set of all randomized-threshold causal policies. We use the characterization to provide necessary and sufficient conditions on the service-time distributions under which preemptions are beneficial.
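The following Monte Carlo sketch in Python illustrates how a fixed-threshold preemption policy can be evaluated numerically; the zero-wait source model, the lognormal service times, and the threshold values are assumptions made purely for illustration, not the setting of the analysis above.

import random

def average_paoi(threshold, num_deliveries=200_000, seed=1):
    # Estimate average peak AoI under a fixed-threshold preemption policy:
    # an attempt whose service time exceeds `threshold` is preempted and
    # restarted with a freshly sampled update (zero-wait source assumed).
    rng = random.Random(seed)
    t = 0.0
    last_gen = 0.0                          # generation time of the last delivered update
    peaks = []
    while len(peaks) < num_deliveries:
        gen = t
        s = rng.lognormvariate(0.0, 1.5)    # heavy-tailed service time (assumed)
        if s > threshold:
            t += threshold                  # preempt and resample
            continue
        t += s                              # successful delivery
        peaks.append(t - last_gen)          # peak age just before this delivery
        last_gen = gen
    return sum(peaks) / len(peaks)

if __name__ == "__main__":
    for tau in (0.5, 1.0, 2.0, float("inf")):
        print(f"threshold {tau}: average PAoI ~ {average_paoi(tau):.3f}")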
While Age of Information (AoI) has gained importance as a metric characterizing the freshness of information in information-update systems and time-critical applications, most previous studies on AoI have been theoretical. In this chapter, we compile a set of recent works reporting AoI measurements in real-life networks and experimental testbeds, and investigating practical issues such as synchronization, the role of various transport layer protocols, congestion control mechanisms, the application of machine learning for adaptation to network conditions, and device-related bottlenecks such as limited processing power.
More and more emerging Internet of Things (IoT) applications involve status updates, where various IoT devices monitor certain physical processes and report their latest statuses to the relevant information fusion nodes. A new performance measure, termed the age of information (AoI), has recently been proposed to quantify information freshness in time-critical IoT applications. Due to the large number of devices in future IoT networks, decentralized channel access protocols (e.g., random access) are preferable thanks to their low network overhead. Building on the AoI concept, recent efforts have developed several AoI-oriented ALOHA-like random access protocols for boosting network-wide information freshness. However, these works focused on theoretical designs and analysis; the development and implementation of a working prototype to evaluate and further improve these random access protocols in practice have been largely overlooked. Motivated by this, we build a software-defined radio (SDR) prototype for testing and comparing the performance of recently proposed AoI-oriented random access protocols. To this end, we implement a time-slotted wireless system by devising a simple yet effective over-the-air time synchronization scheme, in which beacons that serve as reference timing packets are broadcast by an access point from time to time. For a complete working prototype, we also design the frame structures of the various packets exchanged within the system. Finally, we design a set of experiments, implement them on our prototype, and test the considered algorithms in an office environment.
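To make the slot-level behaviour of such protocols concrete, the short Python sketch below simulates a generic age-threshold ALOHA scheme; the node count, transmission probability, and age threshold are illustrative assumptions and do not correspond to the specific protocols evaluated on the prototype.

import random

def age_threshold_aloha(num_nodes=20, tx_prob=0.05, age_threshold=40,
                        slots=100_000, seed=2):
    # A node contends only once its age exceeds the threshold, then
    # transmits with a fixed probability; a slot succeeds only if
    # exactly one node transmits (collision channel).
    rng = random.Random(seed)
    ages = [1] * num_nodes
    total_age = 0
    for _ in range(slots):
        transmitters = [i for i in range(num_nodes)
                        if ages[i] >= age_threshold and rng.random() < tx_prob]
        winner = transmitters[0] if len(transmitters) == 1 else None
        total_age += sum(ages)
        ages = [1 if i == winner else a + 1 for i, a in enumerate(ages)]
    return total_age / (slots * num_nodes)

if __name__ == "__main__":
    print("network-average AoI (slots):", age_threshold_aloha())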
In this paper, we study large-population multi-agent reinforcement learning (RL) in the context of discrete-time linear-quadratic mean-field games (LQ-MFGs). Our setting differs from most existing work on RL for MFGs in that we consider a non-stationary MFG over an infinite horizon. We propose an actor-critic algorithm to iteratively compute the mean-field equilibrium (MFE) of the LQ-MFG. There are two primary challenges: i) the non-stationarity of the MFG induces a linear-quadratic tracking problem, which requires solving a backwards-in-time (non-causal) equation that cannot be solved by standard (causal) RL algorithms; ii) many RL algorithms assume that the states are sampled from the stationary distribution of a Markov chain (MC), that is, that the chain is already mixed, an assumption that is not satisfied for real data sources. We first identify that the mean-field trajectory follows linear dynamics, allowing the problem to be reformulated as a linear-quadratic Gaussian (LQG) problem. Under this reformulation, we propose an actor-critic algorithm that allows samples to be drawn from an unmixed MC. Finite-sample convergence guarantees for the algorithm are then provided. To characterize the performance of our algorithm in multi-agent RL, we develop an error bound with respect to the Nash equilibrium of the finite-population game.
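Schematically, with illustrative notation that may differ from the paper's, a representative agent $i$ in such an LQ-MFG faces
\[
x^{i}_{t+1} = A x^{i}_{t} + B u^{i}_{t} + w^{i}_{t}, \qquad
J^{i} = \limsup_{T \to \infty} \frac{1}{T}\,\mathbb{E}\sum_{t=0}^{T-1}
\Big[(x^{i}_{t} - \bar{z}_{t})^{\top} Q\,(x^{i}_{t} - \bar{z}_{t})
+ (u^{i}_{t})^{\top} R\, u^{i}_{t}\Big],
\]
where $\bar{z}_{t}$ is the mean-field trajectory to be tracked; if $\bar{z}_{t}$ itself evolves linearly, say $\bar{z}_{t+1} = \bar{A}\,\bar{z}_{t}$, then augmenting the state to $s_{t} = (x^{i}_{t}, \bar{z}_{t})$ turns the tracking problem into a standard LQG control problem in $s_{t}$.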