Age-of-information is a novel performance metric in communication systems that indicates the freshness of the latest received data, with wide applications in monitoring and control scenarios. Another important performance metric in these applications is energy consumption, since monitors or sensors are usually energy constrained. In this paper, we study the energy-age tradeoff in a status update system where data transmission from a source to a receiver may fail due to channel error. Since the status sensing process consumes energy, when a transmission failure happens the source may either retransmit the existing data to save sensing energy, or sense and transmit a new update to minimize age-of-information. A threshold-based retransmission policy is considered in which each update may be transmitted no more than M times. Closed-form expressions for the average age-of-information and energy consumption are derived as functions of the channel failure probability and the maximum number of transmissions M. Numerical simulations validate our analytical results and illustrate the tradeoff between average age-of-information and energy consumption.
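The retransmission mechanism this abstract describes lends itself to a short Monte Carlo sketch. The simulation below estimates the average AoI and per-slot energy under an at-most-M-transmissions policy; the slotted-time model and the unit energy costs `e_sense` and `e_tx` are our own illustrative assumptions, not values from the paper.

```python
import random

def simulate(p_fail, M, n_slots=20_000, e_sense=1.0, e_tx=1.0, seed=0):
    """Monte Carlo estimate of average AoI and energy per slot under a
    retransmit-at-most-M policy over an i.i.d. erasure channel.
    Illustrative sketch; energy units are assumed, not from the paper."""
    rng = random.Random(seed)
    aoi = 1          # AoI at the receiver
    gen_age = 0      # slots since the current packet was sensed
    attempts = 0     # transmissions already spent on the current packet
    total_aoi = 0.0
    energy = 0.0
    for _ in range(n_slots):
        if attempts == 0:              # sense a fresh update
            gen_age = 0
            energy += e_sense
        energy += e_tx                 # transmit (or retransmit)
        attempts += 1
        gen_age += 1
        if rng.random() > p_fail:      # success: receiver AoI resets
            aoi = gen_age
            attempts = 0
        else:                          # failure: age keeps growing
            aoi += 1
            if attempts >= M:          # give up, sense anew next slot
                attempts = 0
        total_aoi += aoi
    return total_aoi / n_slots, energy / n_slots
```

Sweeping M in such a simulation exposes the tradeoff: a larger M skips sensing energy on retransmissions but delivers staler samples.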
Age-of-Information (AoI), or simply age, which measures data freshness, is essential for real-time Internet-of-Things (IoT) applications. On the other hand, energy saving is urgently required by many energy-constrained IoT devices. This paper studies the energy-age tradeoff for status updating from a sensor to a monitor over an error-prone channel. The sensor can sleep, sense and transmit a new update, or retransmit, taking both sensing energy and transmit energy into account. An infinite-horizon average cost problem is formulated as a Markov decision process (MDP) with the objective of minimizing the weighted sum of average AoI and average energy consumption. By solving the associated discounted cost problem and analyzing the Markov chain under the optimal policy, we prove that there exists an optimal stationary threshold policy with only two thresholds, i.e., one threshold on the AoI at the transmitter (AoIT) and the other on the AoI at the receiver (AoIR). Moreover, the two thresholds can be found efficiently by a line search. Numerical results show the performance of the optimal policies and the tradeoff curves under different parameters. Comparisons with conventional policies show that accounting for sensing energy has a significant impact on policy design, and that introducing a sleep mode greatly expands the tradeoff range.
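The two-threshold structure can be illustrated with a small decision rule. The mapping below is one plausible reading of the abstract, not the paper's exact rule: the ordering of the checks and the threshold semantics are assumptions.

```python
def two_threshold_policy(aoi_tx, aoi_rx, th_tx, th_rx):
    """Illustrative stationary policy with one threshold on AoIT and one
    on AoIR (our own reading of the abstract): stay asleep while the
    receiver-side age is small; once it exceeds th_rx, retransmit the
    sample held at the transmitter if that sample is still fresh
    (aoi_tx <= th_tx), otherwise sense and transmit a new one."""
    if aoi_rx <= th_rx:
        return "sleep"
    if aoi_tx <= th_tx:
        return "retransmit"
    return "sense_and_transmit"
```

Because the policy is fully determined by the pair (th_tx, th_rx), each candidate pair can be evaluated by simulation and the best pair located by the line search the paper mentions.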
We consider a communication system in which status updates arrive at a source node, and should be transmitted through a network to the intended destination node. The status updates are samples of a random process under observation, transmitted as packets, which also contain the time stamp to identify when the sample was generated. The age of the information available to the destination node is the time elapsed since the last received update was generated. In this paper, we model the source-destination link using queuing theory, and we assume that the time it takes to successfully transmit a packet to the destination is an exponentially distributed service time. We analyze the age of information in the case that the source node has the capability to manage the arriving samples, possibly discarding packets in order to avoid wasting network resources with the transmission of stale information. In addition to characterizing the average age, we propose a new metric, called peak age, which provides information about the maximum value of the age, achieved immediately before receiving an update.
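Both metrics in this abstract can be computed directly from the timestamps of delivered updates. The helper below evaluates the time-average age (area under the sawtooth age curve) and the mean peak age (the age reached immediately before each reception); the timestamp-list interface is our own framing for illustration.

```python
def age_stats(gen, recv):
    """Time-average age and mean peak age of a sawtooth age process,
    given generation times gen[i] and reception times recv[i] of the
    delivered updates (both sorted, recv[i] >= gen[i], len >= 2).
    Illustrative helper; the window runs between the first and last
    receptions."""
    peaks = []
    area = 0.0
    for i in range(1, len(gen)):
        # Age just before update i arrives: time since update i-1 was
        # generated; this is the peak of the sawtooth.
        peak = recv[i] - gen[i - 1]
        peaks.append(peak)
        # Trapezoid between receptions i-1 and i: age rises linearly
        # from recv[i-1]-gen[i-1] up to the peak computed above.
        lo = recv[i - 1] - gen[i - 1]
        area += 0.5 * (lo + peak) * (recv[i] - recv[i - 1])
    duration = recv[-1] - recv[0]
    return area / duration, sum(peaks) / len(peaks)
```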
Timely status updating is crucial for future applications that involve remote monitoring and control, such as autonomous driving and the Industrial Internet of Things (IIoT). Age of Information (AoI) has been proposed to measure the freshness of status updates. However, it cannot capture critical context information that indicates the time-varying importance of status information and the dynamic evolution of the status. In this paper, we propose a context-based metric, namely the Urgency of Information (UoI), to evaluate the timeliness of status updates. Compared to AoI, the new metric incorporates both time-varying context information and dynamic status evolution, which enables the analysis of context-based adaptive status update schemes, as well as more effective remote monitoring and control. The minimization of average UoI for a status update terminal with an updating frequency constraint is investigated, and an update-index-based adaptive scheme is proposed. Simulation results show that the proposed scheme achieves near-optimal performance with low computational complexity.
This paper analyzes the communication between two energy harvesting wireless sensor nodes. The nodes use automatic repeat request (ARQ) and forward error correction mechanisms for error control. The random availability and arrival of harvested energy may interrupt the signal sampling and decoding operations. We propose a selective sampling scheme in which the length of the transmitted packet to be sampled depends on the available energy at the receiver. The receiver performs decoding once complete samples of the packet are available. The selective sampling information bits are piggybacked on the ARQ messages for use by the transmitter. In this way, the receiver node manages its energy use more efficiently. In addition, we present a partially observable Markov decision process formulation that minimizes the long-term average pairwise error probability and optimizes the transmit power. Optimal and suboptimal power assignment strategies, adapted to the selective sampling and channel state information, are introduced for retransmissions. With a finite battery size and a fixed power assignment policy, an analytical expression for the average packet drop probability is derived. Numerical simulations show the performance gain of the proposed scheme with the power assignment strategy over the conventional scheme.
In most Internet of Things (IoT) networks, edge nodes are commonly used as relays to cache sensing data generated by IoT sensors and to provide communication services for data consumers. However, a critical issue in IoT sensing is that data are usually transient, which necessitates timely updates of cached content items, while frequent cache updates incur considerable energy cost and shorten the lifetime of IoT sensors. To address this issue, we adopt the Age of Information (AoI) to quantify data freshness and propose an online cache update scheme that achieves an effective tradeoff between average AoI and energy cost. Specifically, we first characterize the transmission energy consumption at IoT sensors by incorporating a successful transmission condition. Then, we model cache updating as a Markov decision process to minimize the average weighted cost, with judicious definitions of state, action, and reward. Since user preference towards content items is usually unknown and often evolves over time, we develop a deep reinforcement learning (DRL) algorithm to enable intelligent cache updates. Through trial-and-error exploration, an effective caching policy can be learned without requiring exact knowledge of content popularity. Simulation results demonstrate the superiority of the proposed framework.
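As a simplified stand-in for the DRL cache-update scheme, the sketch below learns a tabular Q-function for a single cached item. The state (capped AoI of the cached copy), the two actions (idle or request an update), and all numeric parameters are illustrative assumptions; the paper's scheme uses a deep network over a much richer state.

```python
import random

def q_learning_cache(p_succ=0.8, w=1.0, e_cost=2.0, aoi_cap=5,
                     n_steps=30_000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Toy tabular analogue of a learned cache-update policy.
    State: capped AoI of the cached copy. Actions: 0 = idle,
    1 = request an update, which succeeds with probability p_succ
    and costs e_cost. Reward: -(w * AoI + energy spent)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(aoi_cap + 1)]
    s = aoi_cap
    for _ in range(n_steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = 0 if q[s][0] >= q[s][1] else 1
        if a == 1 and rng.random() < p_succ:
            s_next = 1                       # fresh copy delivered
        else:
            s_next = min(s + 1, aoi_cap)     # cached content keeps aging
        r = -(w * s_next + (e_cost if a == 1 else 0.0))
        q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
        s = s_next
    return q
```

Reading out `0 if row[0] >= row[1] else 1` per state yields the learned update rule, which tends toward a threshold shape: idle while the cached copy is fresh, update once it is stale.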