In cyber-physical systems such as automobiles, measurement data from sensor nodes should be delivered to other consumer nodes such as actuators in a regular fashion. However, in practical systems operating over unreliable media such as wireless, it is a significant challenge to guarantee small enough inter-delivery times for different clients with heterogeneous channel conditions and inter-delivery requirements. In this paper, we design scheduling policies aimed at satisfying the inter-delivery requirements of such clients. We formulate the problem as a risk-sensitive Markov Decision Process (MDP). Although the resulting problem involves an infinite state space, we first prove that there is an equivalent MDP involving only a finite number of states. We then prove the existence of a stationary optimal policy and establish an algorithm to compute it in a finite number of steps. However, the bane of this and many similar problems is the resulting complexity, and, in an attempt to make fundamental progress, we further propose a new high-reliability asymptotic approach. In essence, this approach considers the scenario in which the channel failure probabilities for different clients are of the same order and asymptotically approach zero. We then determine the asymptotically optimal policy: in a two-client scenario, we show that the asymptotically optimal policy is a modified least time-to-go policy, which is intuitively appealing and easily implementable; in the general multi-client scenario, we are led to an SN policy, and we develop an algorithm of low computational complexity to obtain it. Simulation results show that the resulting policies perform well even in the pre-asymptotic regime with moderate failure probabilities.
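For concreteness, the following is a minimal sketch of the basic least time-to-go rule in the two-client setting: serve the client that is closest to violating its inter-delivery requirement. The Client fields, the inter-delivery bound tau, and the time-since-last-delivery counter are illustrative assumptions, and the specific modification of the rule established in the paper is not reproduced here.

from dataclasses import dataclass

@dataclass
class Client:
    name: str
    tau: int          # inter-delivery requirement, in time slots (assumed)
    since_last: int   # time elapsed since the last successful delivery

def time_to_go(c: Client) -> int:
    # Slots remaining before the client's inter-delivery requirement is violated.
    return c.tau - c.since_last

def least_time_to_go(clients):
    # Serve the client with the smallest time-to-go.
    return min(clients, key=time_to_go)

# Example: client B is closer to violating its requirement, so it is chosen.
a = Client("A", tau=10, since_last=3)   # time-to-go = 7
b = Client("B", tau=5, since_last=4)    # time-to-go = 1
print(least_time_to_go([a, b]).name)    # -> B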
In cyber-physical systems such as in-vehicle wireless sensor networks, a large number of sensor nodes continually generate measurements that should be received by other nodes such as actuators in a regular fashion. Meanwhile, energy efficiency is also important in wireless sensor networks. Motivated by these considerations, we develop scheduling policies that are energy efficient and simultaneously maintain regular deliveries of packets. A tradeoff parameter is introduced to balance these two conflicting objectives. We employ a Markov Decision Process (MDP) model in which the state of each client is the time since the last delivery of its packet, and reduce it to an equivalent finite-state MDP problem. Although this equivalent problem can be solved by standard dynamic programming techniques, it suffers from high computational complexity. We therefore further pose the problem as a restless multi-armed bandit problem and employ the low-complexity Whittle Index policy. It is shown that the problem is indexable, and the Whittle indexes are derived. We also prove that the Whittle Index policy is asymptotically optimal and validate its optimality via extensive simulations.
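As an illustration of how such an index policy is applied, the sketch below computes an index per client from its time-since-last-delivery state and serves the client with the largest index in each slot. The index function shown is a hypothetical placeholder (staleness weighted by the channel success probability and penalized by an energy weight), not the closed-form Whittle index derived in the paper.

def whittle_index(since_last, success_prob, energy_weight):
    # Placeholder index: grows with staleness, discounted by the energy weight.
    return success_prob * since_last - energy_weight

def schedule(states, success_probs, energy_weight):
    # Serve the client with the largest index in the current slot.
    return max(states, key=lambda c: whittle_index(states[c], success_probs[c], energy_weight))

# Example: two clients with different staleness and channel reliabilities.
states = {"A": 6, "B": 10}             # time-since-last-delivery per client
success_probs = {"A": 0.9, "B": 0.6}   # per-client channel success probability
print(schedule(states, success_probs, energy_weight=0.5))   # -> B (index 5.5 vs. 4.9)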