In most real cases, the transition probabilities between operational modes of Markov jump linear systems cannot be computed exactly and are time-varying. We take this aspect into account by considering Markov jump linear systems where the underlying Markov chain is polytopic and time-inhomogeneous, i.e. its transition probability matrix varies over time, with variations that are arbitrary within a polytopic set of stochastic matrices. For this class of systems we address and solve the infinite-horizon optimal control problem. In particular, we show that the optimal controller can be obtained from a set of coupled algebraic Riccati equations, and that for mean square stabilizable systems the optimal finite-horizon cost corresponding to the solution of a parsimonious set of coupled difference Riccati equations converges exponentially fast to the optimal infinite-horizon cost associated with the set of coupled algebraic Riccati equations. All the presented concepts are illustrated by a numerical example that demonstrates the effectiveness of the proposed solution.
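For intuition on the coupled Riccati equations mentioned above, the sketch below value-iterates the standard coupled difference Riccati equations of a time-homogeneous Markov jump LQR (without the paper's polytopic uncertainty, which requires a min-max treatment); all names are illustrative choices, not the paper's notation.

```python
import numpy as np

def coupled_riccati(A, B, Q, R, P, iters=500):
    """Value iteration for the coupled difference Riccati equations of a
    Markov jump LQR.  With E_i(X) = sum_j p_ij X_j, iterate

      X_i <- Q_i + A_i' E_i(X) A_i
             - A_i' E_i(X) B_i (R_i + B_i' E_i(X) B_i)^{-1} B_i' E_i(X) A_i.

    For mean square stabilizable systems the iterates converge to the
    solution of the coupled algebraic Riccati equations.  Returns the
    limits X_i and the gains K_i = -(R_i + B_i' E_i(X) B_i)^{-1} B_i' E_i(X) A_i.
    """
    N = len(A)
    X = [np.zeros_like(Qi) for Qi in Q]
    for _ in range(iters):
        E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
        X = [Q[i] + A[i].T @ E[i] @ A[i]
             - A[i].T @ E[i] @ B[i]
               @ np.linalg.inv(R[i] + B[i].T @ E[i] @ B[i])
               @ B[i].T @ E[i] @ A[i]
             for i in range(N)]
    E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
    K = [-np.linalg.inv(R[i] + B[i].T @ E[i] @ B[i]) @ B[i].T @ E[i] @ A[i]
         for i in range(N)]
    return X, K
```

With a single mode (N = 1) the recursion reduces to the classical discrete-time Riccati difference equation, which gives a quick sanity check of the implementation.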
We study a class of systems whose parameters are driven by a Markov chain in reverse time. We give a recursive characterization of the second moment matrix, a spectral radius test for mean square stability, and formulas for optimal control. Our results settle the following question: can the classical duality between filtering and control of linear systems (whose matrices are transposed in the dual problem) be extended by simply adding the jump variable of a Markov jump linear system? The answer is affirmative, provided the jump process is reversed in time.
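As an illustration of a spectral radius test of this kind, the sketch below implements the standard mean square stability criterion for a forward-time Markov jump linear system x(k+1) = A_{theta(k)} x(k): the system is mean square stable iff the matrix Lambda = (P' kron I) * blkdiag(A_i kron A_i) has spectral radius below one. The forward-time setting and the function name are our own assumptions, not the paper's reverse-time formulation.

```python
import numpy as np

def mss_spectral_radius(A_list, P):
    """Spectral-radius test for mean square stability of the MJLS
    x(k+1) = A_{theta(k)} x(k).

    A_list : list of N mode matrices A_i, each (n, n)
    P      : (N, N) transition probability matrix, P[i, j] = Pr(j | i)

    Returns rho(Lambda), Lambda = (P^T kron I_{n^2}) @ blkdiag(A_i kron A_i);
    the system is mean square stable iff the returned value is < 1.
    """
    N = len(A_list)
    n = A_list[0].shape[0]
    m = n * n
    # block-diagonal stack of the Kronecker squares A_i kron A_i
    D = np.zeros((N * m, N * m))
    for i, Ai in enumerate(A_list):
        D[i * m:(i + 1) * m, i * m:(i + 1) * m] = np.kron(Ai, Ai)
    Lam = np.kron(P.T, np.eye(m)) @ D
    return np.max(np.abs(np.linalg.eigvals(Lam)))
```

For scalar modes the test is easy to check by hand, since Lambda is then just P' scaled column-wise by the squared mode gains.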
In this paper, we consider a discrete-time stochastic control problem with uncertain initial and target states. We first discuss the connection between optimal transport and stochastic control problems of this form. Next, we formulate a linear-quadratic regulator problem in which the initial and terminal states are distributed according to specified probability densities. A closed-form solution for the optimal transport map in the case of linear time-varying systems is derived, along with an algorithm for computing the optimal map. Two numerical examples pertaining to swarm deployment demonstrate the practical applicability of the model and the performance of the numerical method.
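For intuition on closed-form optimal transport maps, the sketch below computes the classical Monge map between two Gaussian densities under quadratic cost, T(x) = mu1 + A(x - mu0) with A = S0^{-1/2} (S0^{1/2} S1 S0^{1/2})^{1/2} S0^{-1/2}. This is the static Gaussian case only, not the paper's linear time-varying construction, and all names are illustrative.

```python
import numpy as np

def _sqrtm_psd(M):
    """Symmetric square root of a positive semidefinite matrix."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_transport_map(mu0, S0, mu1, S1):
    """Closed-form Monge map between Gaussians N(mu0, S0) -> N(mu1, S1)
    under quadratic cost:
        T(x) = A x + b,
        A = S0^{-1/2} (S0^{1/2} S1 S0^{1/2})^{1/2} S0^{-1/2},
        b = mu1 - A mu0.
    By construction A S0 A^T = S1, i.e. T pushes the source Gaussian
    exactly onto the target one.  Returns (A, b).
    """
    S0h = _sqrtm_psd(S0)
    S0h_inv = np.linalg.inv(S0h)
    A = S0h_inv @ _sqrtm_psd(S0h @ S1 @ S0h) @ S0h_inv
    b = mu1 - A @ mu0
    return A, b
```

In the scalar case the map reduces to the familiar rescaling T(x) = mu1 + (sigma1/sigma0)(x - mu0).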
The aim of this paper is to propose a new numerical approximation of the Kalman-Bucy filter for semi-Markov jump linear systems. The approximation is based on selecting typical trajectories of the driving semi-Markov chain of the process by means of an optimal quantization technique. The main advantage of this approach is that it makes pre-computations possible. We derive a Lipschitz property for the solution of the Riccati equation and a general result on the convergence of perturbed solutions of semi-Markov switching Riccati equations when the perturbation comes from the driving semi-Markov chain. Based on these results, we prove the convergence of our approximation scheme in a general countably infinite state-space framework and derive an error bound in terms of the quantization error and the time discretization step. We employ the proposed filter in a magnetic levitation example with Markovian failures and compare its performance with both the Kalman-Bucy filter and the Markovian linear minimum mean squares estimator.
We consider the problem of designing control laws for stochastic jump linear systems where the disturbances are drawn randomly from a finite sample space according to an unknown distribution, which is estimated from a finite sample of i.i.d. observations. We adopt a distributionally robust approach to compute a mean-square stabilizing feedback gain with a given probability. The larger the sample size, the less conservative the controller, yet our methodology gives stability guarantees with high probability, for any number of samples. Using tools from statistical learning theory, we estimate confidence regions for the unknown probability distributions (ambiguity sets) which have the shape of total variation balls centered around the empirical distribution. We use these confidence regions in the design of appropriate distributionally robust controllers and show that the associated stability conditions can be cast as a tractable linear matrix inequality (LMI) by using conjugate duality. The resulting design procedure scales gracefully with the size of the probability space and the system dimensions. Through a numerical example, we illustrate the superior sample complexity of the proposed methodology over the stochastic approach.
This work introduces a new abstraction technique for reducing the state space of large, discrete-time labelled Markov chains. The abstraction leverages the semantics of interval Markov decision processes and the existing notion of approximate probabilistic bisimulation. Standard abstractions use abstract points taken from the state space of the concrete model, which serve as representatives for sets of concrete states; in this work, by contrast, the abstract structure is built from abstract points that are not necessarily selected from the states of the concrete model but are instead a function of those states. The resulting model exhibits a smaller one-step bisimulation error than a like-sized, standard Markov chain abstraction. We outline a method to perform probabilistic model checking and show that the computational complexity of the new method is comparable to that of standard abstractions based on approximate probabilistic bisimulations.