In this short paper, we consider the problem of designing a near-optimal competitive scheduling policy for $N$ mobile users, to maximize the freshness of available information uniformly across all users. Motivated by the unreliability and non-stationarity of the emerging 5G-mmWave channels for high-speed users, we forgo any statistical assumptions about the wireless channels and user mobility. Instead, we allow the channel states and the mobility patterns to be dictated by an omniscient adversary. It is not difficult to see that no competitive scheduling policy can exist for the corresponding throughput-maximization problem in this adversarial model. Surprisingly, we show that there exists a simple online distributed scheduling policy with a finite competitive ratio for maximizing the freshness of information in this adversarial model. Moreover, we prove that the proposed policy is competitively optimal up to an $O(\ln N)$ factor.
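To make the adversarial setting concrete, the following is a minimal Python sketch of a max-age greedy scheduler operating under arbitrarily varying connectivity. It illustrates the style of simple online policy the abstract refers to, not the paper's actual policy; the random channel draw merely stands in for the adversary's choices, and all parameter values are hypothetical.

```python
import random

def greedy_max_age_schedule(ages, channel_ok):
    """Serve the connected user with the largest current age; idle otherwise."""
    connected = [i for i, ok in enumerate(channel_ok) if ok]
    return max(connected, key=lambda i: ages[i]) if connected else None

def simulate(num_users=5, horizon=1000, seed=0):
    rng = random.Random(seed)
    ages = [1] * num_users
    for _ in range(horizon):
        # channel states are adversary-chosen in the model; a random draw
        # is used here purely as a stand-in to make the sketch executable
        channel_ok = [rng.random() < 0.5 for _ in range(num_users)]
        served = greedy_max_age_schedule(ages, channel_ok)
        ages = [1 if i == served else a + 1 for i, a in enumerate(ages)]
    return ages

print(simulate())
```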
We study a multi-user downlink scheduling problem for optimizing the freshness of information available to users roaming across multiple cells. We consider both adversarial and stochastic settings and design scheduling policies that optimize two distinct information freshness metrics, namely the average age-of-information and the peak age-of-information. We show that a natural greedy scheduling policy is competitive with the optimal offline policy in the adversarial setting. We also derive fundamental lower bounds on the competitive ratio achievable by any online policy. In the stochastic setting, we show that a Max-Weight scheduling policy that takes the channel statistics into account achieves an approximation factor of $2$ for minimizing the average age-of-information in two extreme mobility scenarios. We conclude the paper by establishing a large-deviation optimality result achieved by the greedy policy for minimizing the peak age-of-information for static users located in a single cell.
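As an illustration of the stochastic setting, here is a small sketch of one common Max-Weight instantiation for age-of-information, which weighs each user by its channel success probability times its current age. The exact weight function used in the paper may differ, and the success probabilities below are hypothetical.

```python
import random

def max_weight_user(ages, p):
    # weight each user by (success probability) x (current age); other
    # weightings, e.g. p[i] * ages[i] ** 2, also appear in the AoI literature
    return max(range(len(ages)), key=lambda i: p[i] * ages[i])

def simulate(p, horizon=10000, seed=1):
    rng = random.Random(seed)
    ages = [1] * len(p)
    total = 0
    for _ in range(horizon):
        u = max_weight_user(ages, p)
        delivered = rng.random() < p[u]          # unreliable channel
        ages = [1 if (i == u and delivered) else a + 1
                for i, a in enumerate(ages)]
        total += sum(ages)
    return total / (horizon * len(p))            # time-averaged AoI per user

print(simulate([0.9, 0.5, 0.3]))
```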
We design a new scheduling policy to minimize a general non-decreasing cost function of the age of information (AoI) in a multiuser system. In this system, the base station stochastically generates time-sensitive packets and transmits them to the corresponding user equipment via an unreliable channel. We first formulate the transmission scheduling problem as an average-cost constrained Markov decision process. By introducing a service charge, we derive a closed-form expression for the Whittle index, based on which we design the scheduling policy. Using numerical results, we demonstrate the performance of the designed scheduling policy against existing policies, such as the optimal policy, the on-demand Whittle index policy, and the age-greedy policy.
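The following skeleton shows how a Whittle-index policy of this kind is typically applied: compute a per-user index from its current state and transmit to the user with the largest index. The body of `whittle_index` below is only an illustrative placeholder, quadratic in the age and scaled by the channel success probability; the paper's actual closed-form expression, derived via the service charge, is not reproduced here.

```python
def whittle_index(age, has_packet, p):
    """Placeholder index: zero when no packet is available, otherwise
    quadratic in the age and scaled by the success probability p.  The
    paper's closed-form expression would replace this body."""
    return p * age * (age + 1) / 2 if has_packet else 0.0

def index_policy(states):
    """states: one (age, has_packet, p) tuple per user; transmit to the
    user whose current Whittle index is largest."""
    return max(range(len(states)), key=lambda i: whittle_index(*states[i]))

print(index_policy([(3, True, 0.8), (7, True, 0.4), (10, False, 0.9)]))
```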
Unmanned aerial vehicles (UAVs) are expected to be a key component of next-generation wireless systems. Due to their deployment flexibility, UAVs are being considered as an efficient solution for collecting data from ground nodes and transmitting it wirelessly to the network. In this paper, a UAV-assisted wireless network is studied, in which energy-constrained ground nodes are deployed to observe different physical processes. In this network, a UAV whose operation time is constrained by its limited battery moves towards the ground nodes to receive status-update packets about their observed processes. The flight trajectory of the UAV and the scheduling of status-update packets are jointly optimized with the objective of minimizing the weighted sum of the age-of-information (AoI) values of the different processes at the UAV, referred to as the weighted sum-AoI. The problem is modeled as a finite-horizon Markov decision process (MDP) with finite state and action spaces. Since the state space is extremely large, a deep reinforcement learning (RL) algorithm is proposed to obtain the optimal policy that minimizes the weighted sum-AoI, referred to as the age-optimal policy. Several simulation scenarios are considered to showcase the convergence of the proposed deep RL algorithm. Moreover, the results demonstrate that the proposed deep RL approach can significantly improve the achievable sum-AoI per process compared to baseline policies, such as the distance-based and random-walk policies. The impact of various system design parameters on the optimal achievable sum-AoI per process is also shown through extensive simulations.
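For concreteness, here is a small sketch of the weighted sum-AoI objective and the distance-based baseline mentioned in the abstract. The precise baseline definition and the deep RL machinery from the paper are not reproduced, and all coordinates below are hypothetical.

```python
import math

def weighted_sum_aoi(ages, weights):
    """Objective from the abstract: weighted sum of per-process AoI values."""
    return sum(w * a for w, a in zip(weights, ages))

def distance_based_policy(uav_pos, node_pos):
    """Baseline named in the abstract: head for the nearest ground node.
    (The exact baseline used in the paper may differ in its details.)"""
    return min(range(len(node_pos)),
               key=lambda i: math.dist(uav_pos, node_pos[i]))

print(weighted_sum_aoi([4, 2, 9], [0.5, 0.3, 0.2]))
print(distance_based_policy((0.0, 0.0), [(3.0, 4.0), (1.0, 1.0), (6.0, 2.0)]))
```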
We study the multi-user scheduling problem for minimizing the Age of Information (AoI) in cellular wireless networks under stationary and non-stationary regimes. We derive fundamental lower bounds for the scheduling problem and design efficient online policies with provable performance guarantees. In the stationary setting, we consider the AoI optimization problem for a set of mobile users traveling across multiple cells. In this setting, we propose a scheduling policy and show that it is $2$-optimal. Next, we propose a new adversarial channel model for studying the scheduling problem in non-stationary environments. For $N$ users, we show that the competitive ratio of any online scheduling policy in this setting is at least $\Omega(N)$. We then propose an online policy and show that it achieves a competitive ratio of $O(N^2)$. Finally, we introduce a relaxed adversarial model that provides channel state estimates for the immediate future. We propose a heuristic model predictive control policy that exploits this feature and evaluate its performance through numerical simulations.
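The relaxed model with short-horizon channel estimates lends itself to a receding-horizon heuristic. Below is a minimal brute-force sketch of such a model predictive control step, assuming a hypothetical lookahead window of predicted per-user channel availabilities; the paper's heuristic may be structured differently.

```python
from itertools import product

def window_cost(ages, schedule, channel_pred):
    """Age accumulated over the lookahead window if `schedule` is followed."""
    ages = list(ages)
    cost = 0
    for t, user in enumerate(schedule):
        delivered = channel_pred[t][user]        # predicted channel state
        ages = [1 if (i == user and delivered) else a + 1
                for i, a in enumerate(ages)]
        cost += sum(ages)
    return cost

def mpc_action(ages, channel_pred):
    """Evaluate every schedule over the window and commit only to the first
    action of the best one (receding-horizon control); brute force suffices
    for a small number of users and a short window."""
    n, w = len(ages), len(channel_pred)
    best = min(product(range(n), repeat=w),
               key=lambda s: window_cost(ages, s, channel_pred))
    return best[0]

# 3 users, 2-slot lookahead of predicted per-user channel availability
print(mpc_action([4, 2, 9], [[True, True, False], [False, True, True]]))
```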
Age of Incorrect Information (AoII) is a recently introduced performance metric that accounts for communication goals. Hence, compared with traditional performance metrics and the recently introduced Age of Information (AoI), AoII better reflects performance in many real-life applications. However, the fundamental nature of AoII has remained elusive so far. In this paper, we consider the AoII in a system where a transmitter sends updates about a multi-state Markovian source to a remote receiver through an unreliable channel. The communication goal is to minimize the AoII subject to a power constraint. We cast the problem as a Constrained Markov Decision Process (CMDP) and prove that the optimal policy is a mixture of two deterministic threshold policies. Then, by leveraging Relative Value Iteration (RVI) and the structural properties of threshold policies, we propose an efficient algorithm to find the threshold policies as well as the mixing coefficient. Finally, numerical results are presented to highlight the performance of the AoII-optimal policy.
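A minimal sketch of the optimal structure stated above, namely a randomized mixture of two deterministic threshold policies, is given below. The thresholds and mixing coefficient shown are hypothetical placeholders for the values the paper's RVI-based algorithm would compute.

```python
import random

def mixed_threshold_policy(aoii, th_low, th_high, q, rng=random):
    """Structure from the abstract: randomize between two deterministic
    threshold policies with mixing coefficient q (so the power constraint
    is met on average); transmit iff the current AoII reaches the drawn
    threshold."""
    threshold = th_low if rng.random() < q else th_high
    return aoii >= threshold

# illustrative values only: thresholds 2 and 3, mixing coefficient 0.4
print(mixed_threshold_policy(aoii=2, th_low=2, th_high=3, q=0.4))
```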