This paper considers the problem of scheduling sensory data from multiple processes. There are $n$ independent linear time-invariant processes and a remote estimator monitoring all of them. Each process is measured by a sensor, which sends its local state estimate to the remote estimator. Because the processes have different dimensions, the packets have different sizes, and a sensor may therefore need a different number of time steps to send its data. Owing to bandwidth limitations, only a subset of the sensors is allowed to transmit at a time. Our goal is to minimize the average estimation error covariance of the whole system at the remote estimator. The problem is formulated as a Markov decision process (MDP) with average cost over an infinite time horizon. We prove the existence of a deterministic and stationary optimal policy for the problem. We also find that the optimal policy exhibits consistent behavior and a threshold-type structure. A numerical example is provided to illustrate our main results.
This work considers sensor scheduling for multiple dynamic processes. We consider $n$ linear dynamic processes; the state of each process is measured by a sensor, which transmits its local state estimate over a wireless channel to a remote estimator at a certain communication cost. In each time step, only a subset of the sensors is allowed to transmit data to the remote estimator, and a packet may be lost due to the unreliability of the wireless channels. Our goal is to find a scheduling policy that coordinates the sensors in a centralized manner so as to minimize the total expected estimation error at the remote estimator together with the communication costs. We formulate the problem as a Markov decision process. We develop an algorithm to check whether there exists a deterministic stationary optimal policy. We show the optimality of monotone policies, which reduces the computational effort of finding an optimal policy and facilitates practical implementation. Nevertheless, obtaining an exact optimal policy still suffers from the curse of dimensionality when the number of processes is large. We therefore provide an index-based heuristic to avoid brute-force computation. Numerical examples are presented to illustrate our theoretical results.
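The index-based idea above can be sketched as a simple greedy rule: at each step, rank the $n$ processes by a per-process index and schedule the $m$ with the largest indices. In the minimal sketch below, the index is taken to be the one-step-ahead error covariance of a scalar model, which is an illustrative stand-in for the paper's actual index; the dynamics `a`, noise variances `q`, and success probability `p_success` are all assumptions.

```python
import random

def greedy_schedule(P, a, q, m, p_success=0.8):
    """Advance all n scalar error covariances one step, letting only the m
    processes with the largest indices transmit over a lossy channel.

    P: current error covariances; a, q: per-process dynamics and noise
    variances (assumed scalar models x' = a*x + w, Var(w) = q)."""
    n = len(P)
    # index = one-step-ahead covariance if the process is NOT scheduled
    index = [a[i] ** 2 * P[i] + q[i] for i in range(n)]
    chosen = sorted(range(n), key=lambda i: index[i], reverse=True)[:m]
    for i in range(n):
        if i in chosen and random.random() < p_success:
            P[i] = 0.0                      # estimate delivered, error reset
        P[i] = a[i] ** 2 * P[i] + q[i]      # propagate through the dynamics
    return P, chosen
```

The greedy rule costs only a sort per step, in contrast to solving the full MDP whose state space grows exponentially in $n$.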
This paper considers a remote state estimation problem with multiple sensors observing a dynamical process, where sensors transmit local state estimates over an independent and identically distributed (i.i.d.) packet dropping channel to a remote estimator. At every discrete time instant, the remote estimator decides whether each sensor should transmit or not, with each sensor transmission incurring a fixed energy cost. The channel is shared such that collisions will occur if more than one sensor transmits at a time. Performance is quantified via an optimization problem that minimizes a convex combination of the expected estimation error covariance at the remote estimator and expected energy usage across the sensors. For transmission schedules dependent only on the estimation error covariance at the remote estimator, this work establishes structural results on the optimal scheduling which show that 1) for unstable systems, if the error covariance is large then a sensor will always be scheduled to transmit, and 2) there is a threshold-type behaviour in switching from one sensor transmitting to another. Specializing to the single sensor case, these structural results demonstrate that a threshold policy (i.e. transmit if the error covariance exceeds a certain threshold and don't transmit otherwise) is optimal. We also consider the situation where sensors transmit measurements instead of state estimates, and establish structural results including the optimality of threshold policies for the single sensor, scalar case. These results provide a theoretical justification for the use of such threshold policies in variance-based event-triggered estimation. Numerical studies confirm the qualitative behaviour predicted by our structural results. An extension of the structural results to Markovian packet drops is also outlined.
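A single-sensor threshold policy of the kind shown optimal above can be sketched in a few lines: transmit exactly when the remote error covariance exceeds a threshold, and reset the covariance on successful reception. The scalar system `a`, noise variance `q`, drop probability `p_drop`, and the threshold value are illustrative assumptions, not values from the paper.

```python
import random

a, q = 1.2, 1.0      # assumed scalar system x' = a*x + w, Var(w) = q
p_drop = 0.3         # assumed i.i.d. packet drop probability
threshold = 5.0      # assumed covariance threshold (not from the paper)

def step(P):
    """One time step of the remote estimator's error covariance P under a
    threshold policy: transmit iff P exceeds the threshold."""
    transmit = P > threshold
    if transmit and random.random() > p_drop:
        P = 0.0                            # estimate received, error reset
    return a * a * P + q, transmit         # Lyapunov-type covariance update

P, history = 0.0, [0.0]
for _ in range(20):
    P, _ = step(P)
    history.append(P)
```

Because $a > 1$ the open-loop covariance grows geometrically, so the policy lets the error build up to the threshold and then transmits until a packet gets through.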
A common situation when dealing with multimedia traffic is having large data frames fragmented into smaller IP packets, with these packets sent independently through the network. For real-time multimedia traffic, dropping even a few packets of a frame may render the entire frame useless. Such traffic is usually modeled as having inter-packet dependencies. We study the problem of scheduling traffic with such dependencies, where each packet has a deadline by which it should arrive at its destination. Such deadlines are common for real-time multimedia applications, and are derived from stringent delay constraints posed by the application. The figure of merit in such environments is the system's goodput, namely, the number of frames successfully delivered. We study online algorithms for the problem of maximizing the goodput of delay-bounded traffic with inter-packet dependencies, and use competitive analysis to evaluate their performance. We present competitive algorithms for the problem, as well as matching lower bounds that are tight up to a constant factor. We also present the results of a simulation study which further validates our algorithmic approach and shows that insights arising from our analysis are indeed manifested in practice.
We present an algorithm for controlling and scheduling multiple linear time-invariant processes on a shared bandwidth-limited communication network using adaptive sampling intervals. The controller is centralized and, at every sampling instant, computes not only the new control command for a process but also the time interval to wait before taking the next sample. The approach relies on model predictive control ideas, where the cost function penalizes the state and control effort as well as the time interval until the next sample is taken. The latter is introduced in order to generate an adaptive sampling scheme for the overall system such that the sampling interval increases as the norm of the system state goes to zero. The paper presents a method for synthesizing such a predictive controller and gives explicit sufficient conditions for when it is stabilizing. Further explicit conditions are given which guarantee conflict-free transmissions on the network. It is shown that the optimization problem may be solved off-line and that the controller can be implemented as a lookup table of state feedback gains. Simulation studies comparing the proposed algorithm to periodic sampling illustrate potential performance gains.
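The adaptive-sampling idea can be illustrated with a crude one-step stand-in for the predictive controller: grid-search jointly over the control command and the next sampling interval, with a cost that penalizes the predicted state over the held interval and rewards longer intervals. The scalar dynamics, weights, and candidate grids below are all assumptions for illustration; the paper synthesizes the controller off-line over a receding horizon instead.

```python
def choose_control_and_interval(x, a=0.9, b=1.0, q=1.0, r=0.1, s=0.5,
                                u_grid=(-1.0, -0.5, 0.0, 0.5, 1.0),
                                h_grid=(1, 2, 4, 8)):
    """Jointly pick control u and next sampling interval h (in steps) for
    assumed scalar dynamics x' = a*x + b*u."""
    best = None
    for u in u_grid:
        for h in h_grid:
            x_next = a * x + b * u
            # state penalty accrues over the h held steps; the -s*h term
            # rewards longer intervals, so h grows as |x| shrinks
            cost = q * x_next ** 2 * h + r * u ** 2 - s * h
            if best is None or cost < best[0]:
                best = (cost, u, h)
    return best[1], best[2]
```

Near the origin the interval reward dominates and the longest interval is chosen; far from the origin the state penalty dominates and the controller samples at every step, which is exactly the qualitative behavior described above. Since the search depends only on `x`, the resulting map can be tabulated off-line as a lookup table.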
In this paper I investigate several offline and online data transfer scheduling problems and propose efficient algorithms and techniques for addressing them. In the offline case, I present a novel heuristic algorithm for scheduling files with divisible sizes on multiple disjoint paths in order to maximize the total profit (the problem is equivalent to the multiple knapsack problem with divisible item sizes). I then consider a cost optimization problem for transferring a sequence of identical files, subject to time constraints imposed by the data transfer providers. For the online case I propose an algorithmic framework based on the block partitioning method, which can speed up the process of resource allocation and reservation.
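As a point of reference for the multiple-knapsack view, here is a minimal profit-density greedy with first-fit placement onto the paths; it is a generic baseline, not the paper's heuristic, and the names and the first-fit rule are assumptions.

```python
def greedy_multiknapsack(files, capacities):
    """files: list of (size, profit); capacities: capacity of each path.
    Greedily place files in decreasing profit-per-unit-size order, first-fit
    onto the paths. Returns (total profit, path_index -> placed files)."""
    remaining = list(capacities)
    assignment = {i: [] for i in range(len(capacities))}
    total = 0
    for size, profit in sorted(files, key=lambda f: f[1] / f[0], reverse=True):
        for i, cap in enumerate(remaining):
            if size <= cap:          # first path with room for the file
                remaining[i] -= size
                assignment[i].append((size, profit))
                total += profit
                break                # file placed; move to the next one
    return total, assignment
```

With divisible item sizes (each size dividing every larger one), greedy packings of this kind avoid the fragmentation that makes general multiple knapsack hard, which is the structure the paper's heuristic exploits.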