The effects of quantization and coding on the estimation quality of Gauss-Markov processes are considered, with special attention to the Ornstein-Uhlenbeck process. Samples are acquired from the process, quantized, and then encoded for transmission using either infinite incremental redundancy (IIR) or fixed redundancy (FR) coding schemes. A fixed processing time is consumed at the receiver for decoding and sending feedback to the transmitter. Decoded messages are used to construct a minimum mean square error (MMSE) estimate of the process as a function of time. This is shown to be an increasing functional of the age-of-information (AoI), defined as the time elapsed since the sampling time pertaining to the latest successfully decoded message. This functional depends on the quantization bits, codeword lengths and receiver processing time. The goal, for each coding scheme, is to optimize sampling times such that the long-term average MMSE is minimized. This is then characterized in the setting of general increasing functionals of AoI, not necessarily corresponding to MMSE, which may be of independent interest in other contexts. We first show that the optimal sampling policy for IIR is such that a new sample is generated only if the AoI exceeds a certain threshold, while for FR it is such that a new sample is delivered just-in-time as the receiver finishes processing the previous one. Enhanced transmission schemes are then developed in order to exploit the processing times to make new data available at the receiver sooner. For both IIR and FR, it is shown that there exists an optimal number of quantization bits that balances AoI and quantization errors, and hence minimizes the MMSE. It is also shown that for longer receiver processing times, the relatively simpler FR scheme outperforms IIR.
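As a point of reference for how the MMSE relates to the AoI, consider a minimal sketch assuming zero-mean Ornstein-Uhlenbeck dynamics $dX_t = -\theta X_t\,dt + \sigma\,dW_t$ and, for simplicity, ignoring the quantization error: if the latest successfully decoded message carries the sample $X_{S_i}$, then
\[
\hat{X}_t = \mathbb{E}\!\left[X_t \mid X_{S_i}\right] = X_{S_i}\, e^{-\theta (t - S_i)}, \qquad
\mathrm{mmse}(t) = \frac{\sigma^2}{2\theta}\left(1 - e^{-2\theta \Delta(t)}\right),
\]
where $\Delta(t) = t - S_i$ is the AoI. The error is increasing in $\Delta(t)$, and accounting for quantization adds a term that depends on the number of bits, which is the source of the bits-versus-age trade-off described above.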
We consider two closely related problems: anomaly detection in sensor networks and testing for infections in human populations. In both problems, we have $n$ nodes (sensors, humans), and each node exhibits an event of interest (anomaly, infection) with probability $p$. We want to keep track of the anomaly/infection status of all nodes at a central location. We develop a $group$ $updating$ scheme, akin to group testing, which updates the central location about the status of each member of the population by appropriately grouping their individual statuses. Unlike group testing, which uses the expected number of tests as a metric, in group updating, we use the expected age of information at the central location as a metric. We determine the optimal group size to minimize the age of information. We show that, when $p$ is small, the proposed group updating policy yields a smaller age than a sequential updating policy.
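To make the group-size trade-off concrete, the following toy discrete-time simulation uses assumed semantics (not necessarily the exact protocol of the scheme above): groups of size $k$ are served round-robin, an all-clear group costs one time slot, and a group containing at least one event costs one slot plus one slot per member to report individual statuses. The function name and update rule are hypothetical, for illustration only.

```python
import random

def simulate_group_updating(n=120, p=0.05, k=10, slots=200000, seed=0):
    """Toy model: round-robin over the n/k groups; an all-clear group update
    takes 1 slot, otherwise 1 + k slots (individual statuses follow).
    Returns the time-average age, averaged over all n nodes."""
    rng = random.Random(seed)
    groups = [list(range(i, min(i + k, n))) for i in range(0, n, k)]
    age = [0.0] * n          # current age of each node's status at the center
    area = 0.0               # accumulated sum of node ages over time
    t, g = 0, 0
    while t < slots:
        members = groups[g]
        # statuses are sampled now; duration depends on whether any member has an event
        any_event = any(rng.random() < p for _ in members)
        duration = 1 + (len(members) if any_event else 0)
        for _ in range(duration):      # ages keep growing while the update is in flight
            area += sum(age)
            age = [a + 1 for a in age]
            t += 1
        for i in members:              # delivered: age resets to the in-flight time
            age[i] = float(duration)
        g = (g + 1) % len(groups)
    return area / (t * n)

if __name__ == "__main__":
    for k in (1, 5, 10, 20, 40):
        print(k, round(simulate_group_updating(k=k), 2))
```

Under this toy model, very small groups waste slots on per-group overhead, while very large groups frequently contain an event and fall back to individual reporting, so an intermediate $k$ minimizes the average age when $p$ is small.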
We consider a status update system in which the update packets need to be processed to extract the embedded useful information. The source node sends the acquired information to a computation unit (CU) which consists of a master node and $n$ worker nodes. The master node distributes the received computation task to the worker nodes. Upon computation, the master node aggregates the results and sends them back to the source node to keep it \emph{updated}. We investigate the age performance of uncoded and coded (repetition coded, MDS coded, and multi-message MDS (MM-MDS) coded) schemes in the presence of stragglers under i.i.d.~exponential transmission delays and i.i.d.~shifted exponential computation times. We show that, asymptotically, the MM-MDS coded scheme outperforms the other schemes. Furthermore, we characterize the optimal codes such that the average age is minimized.
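As a rough illustration of why coding mitigates stragglers, assume (as a sketch, not the full model above) i.i.d.~shifted exponential worker times $T_i = c + X_i$ with $X_i \sim \mathrm{Exp}(\mu)$. With an $(n,k)$ MDS code the master needs only the fastest $k$ results, so the expected duration of the computation phase is
\[
\mathbb{E}\!\left[T_{(k:n)}\right] = c + \frac{1}{\mu}\left(H_n - H_{n-k}\right), \qquad H_m = \sum_{j=1}^{m} \frac{1}{j},
\]
which remains bounded as $n$ grows with the rate $k/n$ fixed, whereas the uncoded scheme ($k=n$) waits for the slowest worker at a cost growing like $\log n / \mu$.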
We consider a system in which an information source generates independent and identically distributed status update packets from an observed phenomenon that takes $n$ possible values based on a given pmf. These update packets are encoded at the transmitter node to be sent to the receiver node. Instead of encoding all $n$ possible realizations, the transmitter node encodes only the most probable $k$ realizations and discards any realization from the remaining $n-k$ values. We find the average age and determine the age-optimal real-valued codeword lengths such that the average age at the receiver node is minimized. Through numerical evaluations for arbitrary pmfs, we show that this selective encoding policy results in a lower average age than encoding every realization, and we find the age-optimal $k$. We also analyze a randomized selective encoding policy in which the remaining $n-k$ realizations are encoded and sent with a certain probability to further inform the receiver, at the expense of longer codewords for the selected $k$ realizations.
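A minimal sketch of the selective-encoding constraint set (with hypothetical notation): if $p_1 \ge p_2 \ge \dots \ge p_n$ is the pmf and only the top $k$ realizations are encoded, the real-valued codeword lengths $\ell_1, \dots, \ell_k$ are designed for the conditional pmf
\[
q_i = \frac{p_i}{\sum_{j=1}^{k} p_j}, \quad i = 1, \dots, k, \qquad \text{subject to the Kraft inequality } \sum_{i=1}^{k} 2^{-\ell_i} \le 1.
\]
The Shannon lengths $\ell_i = \log_2(1/q_i)$ are a natural baseline, but the age-optimal lengths generally differ, since the average age depends not only on the mean codeword length but also on its higher moments.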
A status updating system is considered in which data from multiple sources are sampled by an energy harvesting sensor and transmitted to a remote destination through an erasure channel. The goal is to deliver status updates of all sources in a timely manner, such that the cumulative long-term average age-of-information (AoI) is minimized. The AoI for each source is defined as the time elapsed since the generation time of the latest successful status update received at the destination from that source. Transmissions are subject to energy availability; energy arrives in units according to a Poisson process, with each energy unit capable of carrying out one transmission from only one source. The sensor is equipped with a unit-sized battery to store the incoming energy. A scheduling policy is designed to determine which source is sampled using the available energy. The problem is studied in two main settings: no erasure status feedback, and perfect instantaneous feedback.
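As a baseline illustration (not the optimized scheduling policy described above), the sketch below simulates a round-robin scheduler with a unit battery, Poisson energy arrivals, i.i.d. erasures, and instantaneous transmissions, and reports the cumulative long-term average AoI; the names and simplifications here are assumptions made for this sketch.

```python
import random

def simulate_round_robin(num_sources=3, energy_rate=1.0, erasure_prob=0.3,
                         num_arrivals=200000, seed=1):
    """Baseline: each arriving energy unit is spent immediately on the next
    source in round-robin order; the transmission is erased i.i.d. with
    probability erasure_prob. Returns the sum over sources of the
    time-average AoI (transmission time is neglected in this sketch)."""
    rng = random.Random(seed)
    t = 0.0
    last_sample_time = [0.0] * num_sources  # generation time of latest delivered update
    area = [0.0] * num_sources              # accumulated age-time area per source
    src = 0
    for _ in range(num_arrivals):
        dt = rng.expovariate(energy_rate)   # Poisson arrivals: Exp(rate) inter-arrival times
        for i in range(num_sources):        # AoI grows linearly between deliveries
            age_before = t - last_sample_time[i]
            area[i] += age_before * dt + 0.5 * dt * dt
        t += dt
        # spend the unit: sample source `src` now and transmit over the erasure channel
        if rng.random() > erasure_prob:
            last_sample_time[src] = t       # success: this source's AoI drops to zero
        src = (src + 1) % num_sources
    return sum(a / t for a in area)

if __name__ == "__main__":
    print(round(simulate_round_robin(), 2))
```

Comparing this baseline against policies that bias sampling toward the source with the largest current (or expected) age is one way to quantify the value of erasure status feedback.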