
Optimal Status Updating with a Finite-Battery Energy Harvesting Source

Posted by: Baran Bacinoglu Tan
Publication date: 2019
Research field: Information Engineering
Paper language: English





We consider an energy harvesting source equipped with a finite battery, which needs to send timely status updates to a remote destination. The timeliness of status updates is measured by a non-decreasing penalty function of the Age of Information (AoI). The problem is to find a policy for generating updates that achieves the lowest possible time-average expected age penalty among all online policies. We prove that one optimal solution of this problem is a monotone threshold policy, which satisfies (i) each new update is sent out only when the age is higher than a threshold and (ii) the threshold is a non-increasing function of the instantaneous battery level. Let $\tau_B$ denote the optimal threshold corresponding to the full battery level $B$, and let $p(\cdot)$ denote the age-penalty function; then we show that $p(\tau_B)$ equals the optimal objective value, i.e., the minimum achievable time-average expected age penalty. These structural properties are used to develop an algorithm that computes the optimal thresholds. Our numerical analysis indicates that the improvement in average age with added battery capacity is largest at small battery sizes; specifically, more than half of the total possible reduction in age is attained when battery storage increases from one transmission's worth of energy to two. This encourages further study of status update policies for sensors with small battery storage.
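As a concrete illustration of how such a monotone threshold policy operates, the following sketch simulates one with a linear age penalty and unit-rate Poisson energy arrivals; the threshold values and parameters are illustrative placeholders, not the optimal ones produced by the paper's algorithm.

```python
import numpy as np

# Minimal simulation of a monotone threshold policy for an energy-harvesting
# source with a finite battery (a sketch; the thresholds below are illustrative
# placeholders, not the optimal values produced by the paper's algorithm).

rng = np.random.default_rng(0)

B = 3                                   # battery capacity, in units of energy
p = lambda age: age                     # age-penalty function p(.), linear here
# Monotone thresholds tau[b]: non-increasing in the battery level b = 1..B.
tau = {1: 2.0, 2: 1.2, 3: 0.8}

T = 2e5                                 # simulated time horizon
t, age, battery = 0.0, 0.0, B
penalty_area = 0.0                      # integral of p(AoI) over time

while t < T:
    dt_energy = rng.exponential(1.0)    # next unit of harvested energy (Poisson)
    # Time until the age reaches the current threshold (needs stored energy).
    dt_update = max(tau[battery] - age, 0.0) if battery >= 1 else np.inf
    dt = min(dt_energy, dt_update)

    # Accumulate the age-penalty integral (the trapezoid is exact for linear p).
    penalty_area += 0.5 * (p(age) + p(age + dt)) * dt
    t += dt
    age += dt

    if dt_update <= dt_energy:          # threshold crossed: send an update
        battery -= 1
        age = 0.0
    else:                               # energy arrived first: store it (cap at B)
        battery = min(battery + 1, B)

print("time-average age penalty ≈", penalty_area / t)
```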




Read also

A status updating system is considered in which data from multiple sources are sampled by an energy harvesting sensor and transmitted to a remote destination through an erasure channel. The goal is to deliver status updates of all sources in a timely manner, such that the cumulative long-term average age-of-information (AoI) is minimized. The AoI for each source is defined as the time elapsed since the generation time of the latest successful status update received at the destination from that source. Transmissions are subject to energy availability, which arrives in units according to a Poisson process, with each energy unit capable of carrying out one transmission from only one source. The sensor is equipped with a unit-sized battery to store the incoming energy. A scheduling policy is designed in order to determine which source is sampled using the available energy. The problem is studied in two main settings: no erasure status feedback, and perfect instantaneous feedback.
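For intuition about the AoI bookkeeping in this multi-source model, the sketch below simulates a simple "serve the stalest source" baseline with a unit battery and an erasure channel; this baseline and its parameters are our own illustration, not the scheduling policy designed in that work.

```python
import numpy as np

# Sketch of the multi-source AoI bookkeeping under a "serve the stalest source"
# baseline (our own illustration, not the policy derived in that paper).
# Energy arrives as a unit-rate Poisson process, the battery holds one unit,
# and each transmission is erased with probability q.

rng = np.random.default_rng(1)
K, q, T = 3, 0.3, 2e5                   # number of sources, erasure prob., horizon

age = np.zeros(K)                       # AoI of each source at the destination
t, aoi_area = 0.0, 0.0

while t < T:
    dt = rng.exponential(1.0)           # wait for the next energy unit
    aoi_area += np.sum(age + 0.5 * dt) * dt   # integral of the summed linear AoI
    t += dt
    age += dt
    k = int(np.argmax(age))             # baseline: sample the source with max AoI
    if rng.random() > q:                # delivery not erased: that source's AoI resets
        age[k] = 0.0

print("cumulative long-term average AoI ≈", aoi_area / t)
```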
In this work, we derive optimal transmission policies in an energy harvesting status update system. The system monitors a stochastic process which can be either in a normal or in an alarm state of operation. We capture the freshness of status updates for each state of the stochastic process by introducing two Age of Information (AoI) variables and extend the definition of AoI to account for the state changes of the stochastic process. We formulate the problem at hand as a Markov Decision Process which, under the assumption that the demand for status updates is higher when the stochastic process is in the alarm state, utilizes a transition cost function that applies linear and non-linear penalties based on AoI and the state of the stochastic process. Finally, we evaluate numerically the derived policies and illustrate their effectiveness for reserving energy in anticipation of future alarm states.
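As a hedged illustration of the state-dependent cost described there, the snippet below applies a linear penalty to the normal-state AoI and a non-linear (quadratic) penalty to the alarm-state AoI; the exact penalty functions used in that paper may differ.

```python
# Illustrative transition cost in the spirit of the two-AoI formulation above:
# a linear penalty on the "normal-state" AoI and a steeper, non-linear penalty
# on the "alarm-state" AoI. The exact penalty functions in that paper may differ.

def transition_cost(aoi_normal: float, aoi_alarm: float, in_alarm: bool) -> float:
    if in_alarm:
        return aoi_alarm ** 2     # non-linear: stale updates hurt more during alarms
    return aoi_normal             # linear penalty during normal operation

print(transition_cost(3.0, 0.0, in_alarm=False))   # -> 3.0
print(transition_cost(1.0, 3.0, in_alarm=True))    # -> 9.0
```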
Caching has been regarded as a promising technique to alleviate energy consumption of sensors in Internet of Things (IoT) networks by responding to users' requests with the data packets stored in the edge caching node (ECN). For real-time applications in caching enabled IoT networks, it is essential to develop dynamic status update strategies to strike a balance between the information freshness experienced by users and the energy consumed by the sensor, which, however, is not well addressed. In this paper, we first depict the evolution of information freshness, in terms of age of information (AoI), at each user. Then, we formulate a dynamic status update optimization problem to minimize the expectation of a long term accumulative cost, which jointly considers the users' AoI and the sensor's energy consumption. To solve this problem, a Markov Decision Process (MDP) is formulated to model the status updating procedure, and a model-free reinforcement learning algorithm is proposed, with which the challenge posed by the unknown dynamics of the formulated MDP can be addressed. Finally, simulations are conducted to validate the convergence of our proposed algorithm and its effectiveness compared with the zero-wait baseline policy.
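The model-free idea can be sketched with tabular Q-learning on a deliberately simplified state space (a single user's capped AoI) and a cost that mixes AoI with an assumed per-update energy cost `c_e`; the paper's actual MDP and learning algorithm are richer than this toy.

```python
import numpy as np

# Tabular Q-learning sketch for a simplified status-update MDP: the state is a
# single user's discretized (and capped) AoI, the action is "update" (with an
# assumed energy cost c_e) or "stay silent". This only shows the model-free
# update rule; the paper's state/action space and cost are richer.

rng = np.random.default_rng(2)
A_MAX, c_e, w = 10, 2.0, 1.0            # AoI cap, per-update energy cost, AoI weight
alpha, gamma, eps = 0.1, 0.95, 0.1      # learning rate, discount, exploration

Q = np.zeros((A_MAX + 1, 2))            # Q[aoi, action]; action 1 = send an update
aoi = 0

for step in range(200_000):
    a = rng.integers(2) if rng.random() < eps else int(np.argmin(Q[aoi]))
    next_aoi = 1 if a == 1 else min(aoi + 1, A_MAX)
    cost = w * next_aoi + (c_e if a == 1 else 0.0)   # joint AoI + energy cost
    # Model-free temporal-difference update (cost-minimizing variant).
    Q[aoi, a] += alpha * (cost + gamma * Q[next_aoi].min() - Q[aoi, a])
    aoi = next_aoi

print("learned action per AoI level (1 = update):", np.argmin(Q, axis=1))
```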
We address the problem of how to optimally schedule data packets over an unreliable channel in order to minimize the estimation error of a simple-to-implement remote linear estimator using a constant Kalman gain to track the state of a Gauss-Markov process. The remote estimator receives time-stamped data packets which contain noisy observations of the process. Additionally, they also contain information about the quality of the sensor source, i.e., the variance of the observation noise that was used to generate the packet. In order to minimize the estimation error, the scheduler needs to use both pieces of information when prioritizing packet transmissions. It is shown that a simple index rule that calculates the value of information (VoI) of each packet, and then schedules the packet with the largest current value of VoI, is optimal. The VoI of a packet decreases with its age, and increases with the precision of the source. Thus, we conclude that, for constant filter gains, a policy which minimizes the age of information does not necessarily maximize the estimator performance.
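The index rule can be mimicked with a toy VoI score that grows with source precision and decays with age; the functional form below (geometric decay with an assumed rate `a`) is our own stand-in, not the exact VoI expression derived in that paper.

```python
from dataclasses import dataclass

# Toy version of the index rule: each queued packet gets a value-of-information
# (VoI) score that grows with the source precision (1 / observation-noise
# variance) and decays with the packet's age; the scheduler transmits the packet
# with the largest score. The geometric decay rate `a` is our stand-in for the
# Gauss-Markov dynamics; the paper's exact VoI expression differs.

@dataclass
class Packet:
    age: float          # time elapsed since the packet's measurement was taken
    noise_var: float    # observation-noise variance reported by the source

def voi(pkt: Packet, a: float = 0.9) -> float:
    return (a ** pkt.age) / pkt.noise_var   # precision discounted by age

def schedule(queue: list[Packet]) -> Packet:
    return max(queue, key=voi)

queue = [Packet(age=1.0, noise_var=0.5), Packet(age=4.0, noise_var=0.1)]
print(schedule(queue))   # here the older but more precise packet wins
```

Even with this crude score, the older but more precise packet can outrank the fresher one, echoing the paper's point that minimizing age alone need not maximize estimator performance.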
In this paper, we investigate the performance of simultaneous wireless information and power transfer (SWIPT) in a point-to-point system, adopting practical $M$-ary modulation. We take into account the fact that the receiver's radio-frequency (RF) energy harvesting circuit can only harvest energy when the received signal power is greater than a certain sensitivity level. For both power-splitting (PS) and time-switching (TS) schemes, we derive the energy harvesting performance as well as the information decoding performance for the Nakagami-$m$ fading channel. We also analyze the performance tradeoff between energy harvesting and information decoding by studying an optimization problem, which maximizes the information decoding performance and satisfies a constraint on the minimum harvested energy. Our analysis shows that (i) for the PS scheme, modulations with high peak-to-average power ratio achieve better energy harvesting performance, (ii) for the TS scheme, it is desirable to concentrate the power for wireless power transfer in order to minimize the non-harvested energy caused by the RF energy harvesting sensitivity level, and (iii) channel fading is beneficial for energy harvesting in both PS and TS schemes.
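A Monte-Carlo sketch of the power-splitting receiver with a harvesting sensitivity level under Nakagami-$m$ fading (channel power gain modeled as Gamma(m, Omega/m)) is given below; all parameter values are illustrative assumptions, not those analyzed in that paper.

```python
import numpy as np

# Monte-Carlo sketch of the power-splitting (PS) receiver: a fraction rho of the
# received power feeds the RF energy harvester, which harvests only when that
# portion exceeds the sensitivity level. Nakagami-m fading is modeled via a
# Gamma(m, Omega/m) channel power gain. All parameter values are illustrative.

rng = np.random.default_rng(3)
m, Omega = 2.0, 1.0                     # Nakagami parameter, mean channel power gain
P_tx, rho, eta = 1.0, 0.5, 0.6          # transmit power, PS ratio, conversion efficiency
sens = 0.2                              # harvester sensitivity level (same power units)

g = rng.gamma(shape=m, scale=Omega / m, size=1_000_000)   # channel power gains
p_eh = rho * P_tx * g                                     # power reaching the harvester
harvested = np.where(p_eh > sens, eta * p_eh, 0.0)        # below sensitivity: nothing

print("average harvested power:", harvested.mean())
print("probability of harvesting nothing:", np.mean(p_eh <= sens))
```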