Age-of-Information (AoI), or simply age, which measures data freshness, is essential for real-time Internet-of-Things (IoT) applications. At the same time, energy saving is urgently required by many energy-constrained IoT devices. This paper studies the energy-age tradeoff for status updates from a sensor to a monitor over an error-prone channel. The sensor can sleep, sense and transmit a new update, or retransmit the previous update, with both sensing energy and transmit energy taken into account. An infinite-horizon average cost problem is formulated as a Markov decision process (MDP) with the objective of minimizing the weighted sum of the average AoI and the average energy consumption. By solving the associated discounted cost problem and analyzing the Markov chain under the optimal policy, we prove that there exists an optimal stationary policy with only two thresholds: one on the AoI at the transmitter (AoIT) and the other on the AoI at the receiver (AoIR). Moreover, the two thresholds can be found efficiently by a line search. Numerical results show the performance of the optimal policies and the tradeoff curves under different parameters. Comparisons with conventional policies show that accounting for sensing energy has a significant impact on the policy design, and that introducing a sleep mode greatly expands the tradeoff range.
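A minimal sketch of the two-threshold stationary policy described in this abstract is given below. The exact ordering of the threshold tests is our reading of the abstract, and the threshold values are hypothetical placeholders; the paper obtains them via a line search on the MDP, which is not reproduced here.

```python
# Sketch of a two-threshold policy on (AoIT, AoIR); the test ordering and
# threshold values are illustrative assumptions, not the paper's derivation.

def two_threshold_action(aoi_tx: int, aoi_rx: int,
                         th_tx: int, th_rx: int) -> str:
    """Map the state (AoIT, AoIR) to one of the three sensor actions."""
    if aoi_rx < th_rx:
        return "sleep"            # receiver-side data still fresh enough
    if aoi_tx < th_tx:
        return "retransmit"       # buffered update still fresh: retry it
    return "sense_and_transmit"   # buffered update too stale: sense anew

# Example: with thresholds (3, 5), state AoIT=4, AoIR=7 triggers fresh sensing.
print(two_threshold_action(aoi_tx=4, aoi_rx=7, th_tx=3, th_rx=5))
```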
Age-of-information is a novel performance metric in communication systems that indicates the freshness of the most recently received data, with wide applications in monitoring and control scenarios. Another important performance metric in these applications is energy consumption, since monitors or sensors are usually energy-constrained. In this paper, we study the energy-age tradeoff in a status update system where data transmission from a source to a receiver may fail due to channel error. As the status sensing process consumes energy, when a transmission failure happens, the source may either retransmit the existing data to save sensing energy, or sense and transmit a new update to minimize age-of-information. A threshold-based retransmission policy is considered in which each update may be transmitted no more than M times. Closed-form expressions for the average age-of-information and energy consumption are derived as functions of the channel failure probability and the maximum number of transmissions M. Numerical simulations validate our analytical results and illustrate the tradeoff between average age-of-information and energy consumption.
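The tradeoff this abstract analyzes in closed form can also be observed by simulation. The sketch below is a Monte Carlo stand-in, not the paper's derivation: the slot model and the energy values for sensing and transmission are our own illustrative assumptions.

```python
# Monte Carlo sketch of the M-retransmission policy: after M failed attempts
# the source discards the update, senses a new one, and tries again.  Energy
# values and the slotted model are assumptions made for illustration only.
import random

def simulate(p_fail: float, M: int, n_slots: int = 200_000,
             e_sense: float = 1.0, e_tx: float = 1.0, seed: int = 0):
    rng = random.Random(seed)
    aoi_rx = 0          # age at the receiver
    age_update = 0      # age of the update currently held by the source
    attempts = 0        # transmissions already spent on this update
    aoi_sum = energy = 0.0
    for _ in range(n_slots):
        if attempts == 0:               # sense a fresh update
            age_update = 0
            energy += e_sense
        energy += e_tx                  # one transmission attempt per slot
        attempts += 1
        age_update += 1
        if rng.random() > p_fail:       # delivery succeeds
            aoi_rx = age_update
            attempts = 0
        else:                           # delivery fails
            aoi_rx += 1
            if attempts >= M:           # give up, sense anew next slot
                attempts = 0
        aoi_sum += aoi_rx
    return aoi_sum / n_slots, energy / n_slots

for M in (1, 2, 4, 8):
    aoi, en = simulate(p_fail=0.3, M=M)
    print(f"M={M}: avg AoI ~ {aoi:.2f}, avg energy ~ {en:.2f}")
```

Larger M amortizes the sensing cost over more transmission attempts but delivers staler updates, which is exactly the energy-age tension the abstract describes.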
In this paper, we study the optimal control strategy for Advanced Sleep Modes (ASMs) in 5G networks. ASMs correspond to different levels of sleep, ranging from deactivating some components of the base station for several microseconds to switching off almost all of them for one second or more. ASMs are made possible in 5G networks by the so-called lean carrier radio access design, which allows for configurable signaling periodicities. We model such a system as a Markov decision process (MDP) and find the optimal sleep policy in terms of a trade-off between the power consumption saved and the additional delay incurred by user traffic, which has to wait for the network components to be woken up before being served. Finally, to prevent the system from oscillating between sleep levels, we add a switching component to the cost function and show its impact on the energy reduction versus delay trade-off.
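A toy value-iteration sketch of this kind of MDP is shown below. The power figures, wake-up delays, traffic probability, and switching penalty are illustrative numbers of our own, not the paper's parameters; the point is only to show how a switching term in the cost discourages oscillation between sleep levels.

```python
# Toy value iteration for the ASM energy/delay trade-off with a switching
# penalty.  All numbers are illustrative assumptions, not the paper's model.
import numpy as np

LEVELS = 4                                     # 0 = active ... 3 = deepest sleep
power = np.array([1.0, 0.5, 0.2, 0.05])        # consumption per slot
wake_delay = np.array([0.0, 0.1, 1.0, 10.0])   # slots to wake from each level
P_ARRIVAL = 0.2      # chance that traffic arrives in a slot
W_DELAY = 0.3        # weight of delay vs. energy in the cost
C_SW = 0.5           # switching penalty discouraging oscillation
GAMMA = 0.95         # discount factor

def cost(s: int, a: int) -> float:
    """Expected one-slot cost when moving from sleep level s to level a."""
    return power[a] + W_DELAY * P_ARRIVAL * wake_delay[a] + C_SW * (a != s)

# State = current sleep level; action = level chosen for the next slot.
V = np.zeros(LEVELS)
for _ in range(500):
    V = np.array([min(cost(s, a) + GAMMA * V[a] for a in range(LEVELS))
                  for s in range(LEVELS)])

policy = [int(np.argmin([cost(s, a) + GAMMA * V[a] for a in range(LEVELS)]))
          for s in range(LEVELS)]
print("optimal next level per current level:", policy)
```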
In most Internet of Things (IoT) networks, edge nodes are commonly used as relays to cache sensing data generated by IoT sensors as well as to provide communication services for data consumers. However, a critical issue in IoT sensing is that data are usually transient, which necessitates timely updates of cached content items; yet frequent cache updates can incur considerable energy cost and shorten the lifetime of IoT sensors. To address this issue, we adopt the Age of Information (AoI) to quantify data freshness and propose an online cache update scheme that achieves an effective tradeoff between the average AoI and the energy cost. Specifically, we first characterize the transmission energy consumption at the IoT sensors by incorporating a successful transmission condition. Then, we model cache updating as a Markov decision process to minimize the average weighted cost, with judicious definitions of state, action, and reward. Since user preference towards content items is usually unknown and often evolves over time, we develop a deep reinforcement learning (DRL) algorithm to enable intelligent cache updates. Through trial-and-error exploration, an effective caching policy can be learned without requiring exact knowledge of content popularity. Simulation results demonstrate the superiority of the proposed framework.
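As a simplified stand-in for the paper's DRL agent, the sketch below learns a cache-update rule for a single content item with tabular Q-learning. The AoI cap, update cost, weights, and the deterministic fetch model (no transmission failures, no popularity dynamics) are all our own simplifying assumptions.

```python
# Tabular Q-learning stand-in for the DRL cache-update agent, restricted to
# one content item.  State = cached AoI; action = 1 (update) or 0 (hold).
# All costs and the fetch model are illustrative assumptions.
import random

AOI_MAX, E_UPDATE, W_AOI, W_EN = 10, 2.0, 1.0, 1.0
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
q = [[0.0, 0.0] for _ in range(AOI_MAX + 1)]   # q[aoi][action]

rng = random.Random(1)
aoi = 0
for _ in range(100_000):
    # epsilon-greedy action selection
    a = rng.randrange(2) if rng.random() < EPS else int(q[aoi][1] > q[aoi][0])
    nxt = 1 if a == 1 else min(aoi + 1, AOI_MAX)   # an update resets age to 1
    reward = -(W_AOI * nxt + W_EN * E_UPDATE * a)  # negative weighted cost
    q[aoi][a] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[aoi][a])
    aoi = nxt

policy = [int(q[s][1] > q[s][0]) for s in range(AOI_MAX + 1)]
print("update once cached AoI reaches:",
      policy.index(1) if 1 in policy else None)
```

Even in this stripped-down setting the learned policy is typically of threshold type: hold while the cached item is fresh, update once its AoI crosses a level set by the energy-freshness weights.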
This paper studies transmit beamforming in a downlink integrated sensing and communication (ISAC) system, where a base station (BS) equipped with a uniform linear array (ULA) sends combined information-bearing and dedicated radar signals to simultaneously perform downlink multiuser communication and radar target sensing. Under this setup, we maximize the radar sensing performance (in terms of minimizing the beampattern matching error or maximizing the minimum beampattern gain), subject to the communication users' minimum signal-to-interference-plus-noise ratio (SINR) requirements and the BS's transmit power constraint. In particular, we consider two types of communication receivers, namely Type-I and Type-II receivers, which respectively do not have and do have the capability of cancelling the interference from the a priori known dedicated radar signals. For both receiver types, the beampattern matching and minimum beampattern gain maximization problems are solved to global optimality by applying the semidefinite relaxation (SDR) technique, together with a rigorous proof of the tightness of SDR under the two design criteria. It is shown that, at optimality, dedicated radar signals are not required with Type-I receivers under some specific conditions, whereas dedicated radar signals are always needed to enhance the performance with Type-II receivers. Numerical results show that the minimum beampattern gain maximization leads to significantly higher beampattern gains at the worst-case sensing angles, with a much lower computational complexity than the beampattern matching design. It is also shown that, by exploiting the capability of cancelling the interference caused by the radar signals, Type-II receivers achieve better sensing performance than Type-I receivers and other conventional designs.
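A compact sketch of the SDR formulation for the minimum-beampattern-gain design with Type-I receivers (where the radar covariance counts as interference) is given below, using cvxpy. The array size, random channels, sensing angles, and thresholds are illustrative assumptions, and the rank-reduction step that recovers beamformers from the relaxed covariances is omitted.

```python
# SDR sketch: maximize the worst-case beampattern gain over given sensing
# angles subject to per-user SINR and a total power budget (Type-I model).
# All parameters below are illustrative assumptions.
import numpy as np
import cvxpy as cp

N, K = 8, 2                          # ULA antennas, communication users
P, sigma2, gamma = 10.0, 1.0, 2.0    # power budget, noise power, SINR target
rng = np.random.default_rng(0)
h = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

def steering(theta: float) -> np.ndarray:
    """ULA steering vector with half-wavelength element spacing."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

sense_angles = np.deg2rad([-40.0, 0.0, 40.0])   # worst-case sensing angles

# One PSD covariance per user stream plus one for the dedicated radar signal.
R = [cp.Variable((N, N), hermitian=True) for _ in range(K + 1)]
R_tot = sum(R)
t = cp.Variable()                                # minimum beampattern gain

cons = [Rk >> 0 for Rk in R] + [cp.real(cp.trace(R_tot)) <= P]
for th in sense_angles:
    a = steering(th)
    cons.append(cp.real(a.conj() @ R_tot @ a) >= t)
for k in range(K):                               # Type-I SINR constraints
    hk = h[k]
    signal = cp.real(hk.conj() @ R[k] @ hk)
    interference = cp.real(hk.conj() @ (R_tot - R[k]) @ hk)
    cons.append(signal >= gamma * (interference + sigma2))

cp.Problem(cp.Maximize(t), cons).solve()
print(f"worst-case beampattern gain ~ {t.value:.2f}")
```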
Most methods for publishing data with privacy guarantees introduce randomness into datasets, which reduces the utility of the published data. In this paper, we study the privacy-utility tradeoff by taking maximal leakage as the privacy measure and the expected Hamming distortion as the utility measure. We study three different but related problems. First, we assume that the data-generating distribution (i.e., the prior) is known, and we find the optimal privacy mechanism that achieves the smallest distortion subject to a constraint on the maximal leakage. Then, we assume that the prior belongs to some set of distributions, and we formulate a min-max problem for finding the smallest distortion achievable for the worst-case prior in the set, subject to a maximal leakage constraint. Lastly, we define a partial order on privacy mechanisms based on the largest distortion they generate. Our results show that, when the prior distribution is known, the optimal privacy mechanism fully discloses the symbols with the largest prior probabilities and suppresses the symbols with the smallest prior probabilities. Furthermore, we show that sets of priors containing more uniform distributions lead to larger distortion, while privacy mechanisms that distribute the privacy budget more uniformly over the symbols yield smaller worst-case distortion.
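The two quantities traded off here are easy to compute numerically. The sketch below evaluates the maximal leakage of a mechanism P(y|x), which equals the log of the sum over outputs of the column-wise maxima, and its expected Hamming distortion under a known prior; the toy mechanism (disclose the two most likely symbols, merge the rest) mirrors the structural result in the abstract but is our own illustrative instance.

```python
# Numeric sketch: maximal leakage and expected Hamming distortion of a toy
# privacy mechanism.  The prior and mechanism are illustrative assumptions.
import numpy as np

prior = np.array([0.5, 0.3, 0.15, 0.05])   # known prior on X
# Rows: x, columns: y.  Symbols 0 and 1 are fully disclosed; symbols 2 and 3
# are suppressed by mapping both onto output 2.
mech = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])

# Maximal leakage: log of the sum over outputs of the column maxima.
leakage_bits = np.log2(mech.max(axis=0).sum())

# Expected Hamming distortion: probability that the output differs from x.
distortion = sum(prior[x] * (1.0 - mech[x, x]) for x in range(len(prior)))

print(f"maximal leakage = {leakage_bits:.3f} bits")       # log2(3) ~ 1.585
print(f"expected Hamming distortion = {distortion:.3f}")  # 0.05
```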