Model-Predictive Control for Discrete-Time Queueing Networks with Varying Topology


Abstract

In this paper, we equip the conventional discrete-time queueing network with a Markovian input process that, in addition to the usual short-term stochastics, governs the mid- to long-term behavior of the links between the network nodes. This is reminiscent of so-called Jump-Markov systems in control theory and allows the network topology to change over time. We argue that the common back-pressure control policy is inadequate for controlling such network dynamics, and we propose a novel control policy inspired by the paradigms of model-predictive control. Specifically, by defining a suitable but otherwise arbitrary prediction horizon, our policy takes future network states and possible control actions into account. This stands in clear contrast to most other policies, which are myopic, i.e., they consider only the next state. We show numerically that such an approach can significantly improve control performance, and we introduce several variants that trade off performance against computational complexity. In addition, we prove the so-called throughput optimality of our policy, which guarantees stability for all network flows that the network can maintain. Interestingly, in contrast to general stability proofs in model-predictive control, our proof does not require the assumption of a terminal set (i.e., that the prediction horizon be large enough). Finally, we provide several illustrative examples, one of which is a network of synchronized queues. This example in particular constitutes an interesting class of systems in which our policy demonstrates its superiority over general back-pressure policies, which even lose their throughput optimality in such networks.
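To make the horizon idea concrete, the following is a minimal toy sketch (not the paper's actual policy) of horizon-based control on a hypothetical two-queue tandem network: queue 1 feeds queue 2 over link 1, and link 2 drains queue 2. A known forecast of link availability stands in for the predicted Markovian topology; the controller enumerates all action sequences over the horizon and returns the first action of the sequence with the smallest predicted cumulative backlog. All names (`step`, `mpc_action`, the action labels) are illustrative assumptions.

```python
from itertools import product

# Toy tandem network: queue q1 feeds q2 over link 1; link 2 drains q2.
# links_up is a pair of booleans giving the availability of (link1, link2)
# in a slot, standing in for the topology state of the Markovian input.

def step(q1, q2, action, links_up):
    """Apply one control slot; action is 'serve1', 'serve2', or 'idle'."""
    if action == 'serve1' and links_up[0] and q1 > 0:
        q1, q2 = q1 - 1, q2 + 1      # move one packet from q1 to q2
    elif action == 'serve2' and links_up[1] and q2 > 0:
        q2 -= 1                       # drain one packet from q2
    return q1, q2

def mpc_action(q1, q2, link_forecast, horizon):
    """Return the first action of the sequence minimizing the
    predicted total backlog accumulated over the horizon."""
    best_cost, best_seq = None, None
    for seq in product(['serve1', 'serve2', 'idle'], repeat=horizon):
        a, b, cost = q1, q2, 0
        for t, action in enumerate(seq):
            a, b = step(a, b, action, link_forecast[t])
            cost += a + b             # sum of backlogs after each slot
        if best_cost is None or cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]

# With link 2 predicted to fail in the next slot, a horizon-2 controller
# serves queue 2 while it still can.
print(mpc_action(1, 1, [(True, True), (True, False)], horizon=2))
```

Setting `horizon=1` recovers a myopic rule; the exhaustive enumeration also makes plain why longer horizons trade performance against computational cost, as the number of candidate sequences grows exponentially in the horizon length.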
