
On the linear quadratic problem for systems with time reversed Markov jump parameters and the duality with filtering of Markov jump linear systems

Published by Daniel Alexis Gutierrez Pachas
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





We study a class of systems whose parameters are driven by a Markov chain in reverse time. A recursive characterization of the second moment matrix, a spectral radius test for mean square stability, and formulas for the optimal control are given. Our results settle the following question: is it possible to extend the classical duality between filtering and control of linear systems (whose matrices are transposed in the dual problem) simply by adding the jump variable of a Markov jump linear system? The answer is positive, provided the jump process is reversed in time.
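For the standard forward-time Markov jump linear system, the mean square stability test mentioned in the abstract reduces to checking the spectral radius of an augmented matrix built from Kronecker products of the mode matrices; the time-reversed setting of the paper uses an analogous operator. The Python sketch below illustrates that classical test with made-up system data; it is not the authors' code.

```python
import numpy as np

def ms_stable(A_modes, P):
    """Spectral radius test for mean square stability of an MJLS.

    For the forward-time system x_{k+1} = A_{theta_k} x_k with transition
    matrix P, mean square stability is equivalent to rho(Lambda) < 1 with
    Lambda = (P^T kron I_{n^2}) * blkdiag(A_i kron A_i).
    """
    N = len(A_modes)
    n = A_modes[0].shape[0]
    # Block diagonal of the second-moment lifts A_i (kron) A_i.
    D = np.zeros((N * n * n, N * n * n))
    for i, Ai in enumerate(A_modes):
        D[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = np.kron(Ai, Ai)
    Lam = np.kron(P.T, np.eye(n * n)) @ D
    return max(abs(np.linalg.eigvals(Lam))) < 1.0

# Illustrative two-mode example (matrices are made up).
A = [np.array([[0.5, 0.2], [0.0, 0.7]]), np.array([[1.1, 0.0], [0.3, 0.4]])]
P = np.array([[0.9, 0.1], [0.4, 0.6]])
print(ms_stable(A, P))
```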


Read also

In most real cases, transition probabilities between operational modes of Markov jump linear systems cannot be computed exactly and are time-varying. We take this aspect into account by considering Markov jump linear systems where the underlying Markov chain is polytopic and time-inhomogeneous, i.e., its transition probability matrix varies over time, with variations that are arbitrary within a polytopic set of stochastic matrices. For this class of systems we address and solve the infinite-horizon optimal control problem. In particular, we show that the optimal controller can be obtained from a set of coupled algebraic Riccati equations, and that for mean square stabilizable systems the optimal finite-horizon cost corresponding to the solution of a parsimonious set of coupled difference Riccati equations converges exponentially fast to the optimal infinite-horizon cost associated with the set of coupled algebraic Riccati equations. All the presented concepts are illustrated on a numerical example showing the efficiency of the provided solution.
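As background to the coupled Riccati machinery, the sketch below iterates the coupled difference Riccati equations of the standard time-homogeneous MJLS linear quadratic problem, whose iterates converge to the coupled algebraic solution under mean square stabilizability; the polytopic, time-inhomogeneous case treated in the paper additionally involves a worst case over the polytope. The function name and all system data are illustrative.

```python
import numpy as np

def coupled_riccati(A, B, Q, R, P, iters=500):
    """Iterate the coupled difference Riccati equations of MJLS LQ control.

    X_i <- Q_i + A_i^T E_i(X) A_i
           - A_i^T E_i(X) B_i (R_i + B_i^T E_i(X) B_i)^{-1} B_i^T E_i(X) A_i,
    with coupling E_i(X) = sum_j P[i, j] X_j.
    """
    N, n = len(A), A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
        X = [Q[i] + A[i].T @ E[i] @ A[i]
             - A[i].T @ E[i] @ B[i]
               @ np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i],
                                 B[i].T @ E[i] @ A[i])
             for i in range(N)]
    # Mode-dependent gains for u_k = -K_{theta_k} x_k.
    E = [sum(P[i, j] * X[j] for j in range(N)) for i in range(N)]
    K = [np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
         for i in range(N)]
    return X, K

# Illustrative two-mode data (made up).
A = [np.array([[1.0, 0.3], [0.0, 0.9]]), np.array([[0.8, 0.0], [0.2, 1.1]])]
B = [np.eye(2)] * 2
Q = [np.eye(2)] * 2
R = [np.eye(2)] * 2
P = np.array([[0.7, 0.3], [0.5, 0.5]])
X, K = coupled_riccati(A, B, Q, R, P)
```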
The aim of this paper is to propose a new numerical approximation of the Kalman-Bucy filter for semi-Markov jump linear systems. This approximation is based on the selection of typical trajectories of the driving semi-Markov chain of the process by using an optimal quantization technique. The main advantage of this approach is that it makes pre-computations possible. We derive a Lipschitz property for the solution of the Riccati equation and a general result on the convergence of perturbed solutions of semi-Markov switching Riccati equations when the perturbation comes from the driving semi-Markov chain. Based on these results, we prove the convergence of our approximation scheme in a general infinite countable state space framework and derive an error bound in terms of the quantization error and the time discretization step. We employ the proposed filter in a magnetic levitation example with Markovian failures and compare its performance with both the Kalman-Bucy filter and the Markovian linear minimum mean squares estimator.
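For orientation, here is a minimal sketch of the object being approximated: the mode-dependent Kalman-Bucy Riccati differential equation propagated, by a crude Euler step, along one sampled mode trajectory. The paper's scheme replaces such a trajectory with optimally quantized typical trajectories so that this propagation can be precomputed; all names and data below are illustrative, not from the paper.

```python
import numpy as np

def propagate_riccati(modes, path, P0, dt):
    """Euler-discretized Kalman-Bucy Riccati flow along one mode trajectory.

    dP/dt = A P + P A^T + W - P C^T V^{-1} C P, with (A, C, W, V) switching
    according to the sampled (semi-)Markov mode path.
    """
    P = P0.copy()
    for mode in path:
        A, C, W, V = modes[mode]
        P = P + dt * (A @ P + P @ A.T + W
                      - P @ C.T @ np.linalg.inv(V) @ C @ P)
    return P

# Illustrative two-mode data (made up).
modes = [(-np.eye(2), np.eye(2), 0.1 * np.eye(2), 0.05 * np.eye(2)),
         (np.array([[0.0, 1.0], [-1.0, 0.0]]), np.eye(2),
          0.1 * np.eye(2), 0.05 * np.eye(2))]
path = [0] * 50 + [1] * 50          # one sampled mode trajectory
P_T = propagate_riccati(modes, path, np.eye(2), dt=0.01)
```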
This paper describes the structure of solutions to Kolmogorov's equations for nonhomogeneous jump Markov processes and applications of these results to the control of jump stochastic systems. These equations were studied by Feller (1940), who clarified in 1945, in the errata to that paper, that some of its results covered only nonexplosive Markov processes. We present the results for possibly explosive Markov processes. The paper is based on the invited talk presented by the authors at the International Conference dedicated to the 200th anniversary of the birth of P. L. Chebyshev.
This paper extends to Continuous-Time Jump Markov Decision Processes (CTJMDPs) the classic result for Markov Decision Processes stating that, for a given initial state distribution, for every policy there is a (randomized) Markov policy, which can be defined in a natural way, such that at each time instance the marginal distributions of state-action pairs for the two policies coincide. It is shown in this paper that this equality holds for a CTJMDP if the corresponding Markov policy defines a nonexplosive jump Markov process. If this Markov process is explosive, then at each time instance the marginal probability that a state-action pair belongs to a measurable set of state-action pairs is not greater for the described Markov policy than the same probability for the original policy. These results are used in this paper to prove that, for expected discounted total costs and for average costs per unit time, for a given initial state distribution, for each policy for a CTJMDP the described Markov policy has the same or better performance.
We consider the problem of designing control laws for stochastic jump linear systems where the disturbances are drawn randomly from a finite sample space according to an unknown distribution, which is estimated from a finite sample of i.i.d. observations. We adopt a distributionally robust approach to compute a mean-square stabilizing feedback gain with a given probability. The larger the sample size, the less conservative the controller, yet our methodology gives stability guarantees with high probability for any number of samples. Using tools from statistical learning theory, we estimate confidence regions for the unknown probability distribution (ambiguity sets), which have the shape of total variation balls centered around the empirical distribution. We use these confidence regions in the design of appropriate distributionally robust controllers and show that the associated stability conditions can be cast as a tractable linear matrix inequality (LMI) by using conjugate duality. The resulting design procedure scales gracefully with the size of the probability space and the system dimensions. Through a numerical example, we illustrate the superior sample complexity of the proposed methodology over the stochastic approach.
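The paper's distributionally robust condition is obtained via conjugate duality over a total variation ball; as a simpler, assumed baseline, the sketch below sets up the nominal (known-distribution) mean square stabilization LMI for i.i.d. jump parameters in cvxpy. The function name and all system data are hypothetical, and this is not the paper's robust formulation.

```python
import cvxpy as cp
import numpy as np

def ms_stabilizing_gain(A, B, p, eps=1e-6):
    """Nominal MS stabilization LMI for x_{k+1} = A_w x_k + B_w u_k, w i.i.d.

    Find X > 0 and Y such that [[X, M], [M^T, blkdiag(X, ..., X)]] >> 0,
    where M = [sqrt(p_i) (A_i X + B_i Y)]_i; by a Schur complement this is
    sum_i p_i (A_i + B_i K) X (A_i + B_i K)^T < X with K = Y X^{-1}.
    """
    N, n, m = len(A), A[0].shape[0], B[0].shape[1]
    X = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))
    M = cp.hstack([np.sqrt(p[i]) * (A[i] @ X + B[i] @ Y) for i in range(N)])
    D = cp.bmat([[X if i == j else np.zeros((n, n)) for j in range(N)]
                 for i in range(N)])
    lmi = cp.bmat([[X, M], [M.T, D]])
    prob = cp.Problem(cp.Minimize(0), [lmi >> eps * np.eye((N + 1) * n)])
    prob.solve(solver=cp.SCS)
    return Y.value @ np.linalg.inv(X.value)

# Illustrative two-mode data (made up).
A = [np.array([[1.2, 0.4], [0.0, 0.9]]), np.array([[0.7, 0.0], [0.3, 1.1]])]
B = [np.eye(2)] * 2
K = ms_stabilizing_gain(A, B, p=[0.6, 0.4])
```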