
Approximate Kalman-Bucy filter for continuous-time semi-Markov jump linear systems

Published by: Benoîte de Saporta
Publication date: 2014
Paper language: English





The aim of this paper is to propose a new numerical approximation of the Kalman-Bucy filter for semi-Markov jump linear systems. The approximation is based on selecting typical trajectories of the driving semi-Markov chain of the process by means of an optimal quantization technique. The main advantage of this approach is that it makes pre-computation possible. We derive a Lipschitz property for the solution of the Riccati equation and a general result on the convergence of perturbed solutions of semi-Markov switching Riccati equations when the perturbation comes from the driving semi-Markov chain. Based on these results, we prove the convergence of our approximation scheme in a general countably infinite state space framework and derive an error bound in terms of the quantization error and the time discretization step. We apply the proposed filter to a magnetic levitation example with Markovian failures and compare its performance with both the Kalman-Bucy filter and the Markovian linear minimum mean squares estimator.
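To make the pre-computation idea concrete, the sketch below integrates the mode-dependent Riccati equation along one fixed trajectory of the driving chain and records the resulting Kalman-Bucy gains. This is a minimal sketch, not the paper's scheme: the system matrices, the mode path, the step size, and the explicit Euler discretization are all illustrative assumptions.

```python
import numpy as np

def riccati_gains(modes, dt, A, C, Q, R, P0):
    """Propagate the Riccati ODE  P' = A P + P A' + Q - P C' R^{-1} C P
    along a piecewise-constant mode path and record the filter gains.

    modes   : sequence of mode indices (one fixed switching trajectory)
    A, C, Q : dicts mapping a mode to its system matrices
    R       : measurement noise covariance (mode-independent here)
    """
    Rinv = np.linalg.inv(R)
    P, gains = P0, []
    for m in modes:
        K = P @ C[m].T @ Rinv                    # Kalman-Bucy gain
        gains.append(K)
        dP = A[m] @ P + P @ A[m].T + Q[m] - K @ R @ K.T
        P = P + dt * dP                          # explicit Euler step
    return gains

# Toy two-mode system with made-up matrices and one hypothetical mode path.
A = {0: np.array([[0., 1.], [-1., -0.5]]), 1: np.array([[0., 1.], [-2., -0.1]])}
C = {0: np.array([[1., 0.]]), 1: np.array([[1., 0.]])}
Q = {0: 0.1 * np.eye(2), 1: 0.2 * np.eye(2)}
R = np.array([[0.01]])
path = [0] * 500 + [1] * 500                     # one switching trajectory
gains = riccati_gains(path, dt=1e-3, A=A, C=C, Q=Q, R=R, P0=np.eye(2))
print(gains[-1])                                 # gain at the end of the path
```

In the paper's scheme such gain sequences would be pre-computed offline along the quantized typical trajectories, rather than along a single hand-picked path as here.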




Read also

This paper extends to Continuous-Time Jump Markov Decision Processes (CTJMDPs) the classic result for Markov Decision Processes stating that, for a given initial state distribution, for every policy there is a (randomized) Markov policy, which can be defined in a natural way, such that at each time instant the marginal distributions of state-action pairs under the two policies coincide. It is shown in this paper that this equality holds for a CTJMDP if the corresponding Markov policy defines a nonexplosive jump Markov process. If this Markov process is explosive, then at each time instant the marginal probability that a state-action pair belongs to a measurable set of state-action pairs is no greater under the described Markov policy than under the original policy. These results are used to prove that, for expected discounted total costs and for average costs per unit time, for a given initial state distribution and each policy for a CTJMDP, the described Markov policy has the same or better performance.
State estimation is critical in control systems, especially when the states cannot be measured directly. This paper presents an approximate optimal filter that makes it possible to use a policy-iteration technique to obtain the steady-state gain in linear Gaussian time-invariant systems. The design transforms the optimal filtering problem with minimum mean square error into an optimal control problem, called the Approximate Optimal Filtering (AOF) problem. The equivalence holds under certain conditions on the initial state distributions and policy formats, in which the system state is the estimation error, the control input is the filter gain, and the control objective function is the accumulated estimation error. We present a policy iteration algorithm to solve the AOF problem in steady state. A classic vehicle state estimation problem is used to evaluate the approximate filter. The results show that the policy converges to the steady-state Kalman gain, with accuracy within 2%.
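The filtering-as-control reduction can be illustrated with a discrete-time sketch (an illustration under assumed matrices, not the paper's algorithm or model): policy evaluation solves a Lyapunov equation for the steady-state error covariance under the current gain, and policy improvement updates the gain; the iterates converge to the steady-state Kalman predictor gain.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical stable two-state model and noise covariances (illustrative only).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)      # process noise covariance
R = np.array([[0.1]])     # measurement noise covariance

L = np.zeros((2, 1))      # initial policy: the filter gain
for _ in range(50):
    # Policy evaluation: steady-state error covariance under gain L,
    # solving  P = (A - L C) P (A - L C)' + Q + L R L'
    A_cl = A - L @ C
    P = solve_discrete_lyapunov(A_cl, Q + L @ R @ L.T)
    # Policy improvement: gain minimizing the next error covariance
    L = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

print("steady-state Kalman (predictor) gain:\n", L)
```

Here the "policy" is the constant gain applied to the error dynamics, mirroring the abstract's framing in which the state is the estimation error and the input is the filter gain.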
A set of N independent Gaussian linear time-invariant systems is observed by M sensors whose task is to provide the best possible steady-state causal minimum mean square estimate of the state of the systems, in addition to minimizing a steady-state measurement cost. The sensors can switch between systems instantaneously, and there are additional resource constraints, for example on the number of sensors that can observe a given system simultaneously. We first derive a tractable relaxation of the problem, which provides a bound on the achievable performance. This bound can be computed by solving a convex program involving linear matrix inequalities. Exploiting the additional structure of the sites evolving independently, we can decompose this program into coupled smaller-dimensional problems. In the scalar case with identical sensors, we give an analytical expression for an index policy proposed in a more general context by Whittle. In the general case, we develop open-loop periodic switching policies whose performance matches the bound arbitrarily closely.
We study a class of systems whose parameters are driven by a Markov chain in reverse time. A recursive characterization of the second moment matrix, a spectral radius test for mean square stability, and formulas for optimal control are given. Our results settle the question of whether the classical duality between filtering and control of linear systems (whose matrices are transposed in the dual problem) can be extended by simply adding the jump variable of a Markov jump linear system. The answer is positive, provided the jump process is reversed in time.
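For context, the classical duality the abstract refers to is the following standard correspondence between the filter and LQR Riccati equations (a reference sketch of the textbook fact, not the paper's jump-system result):

```latex
% Filter Riccati equation for the pair (A, C), run forward in time:
\[
  \dot P = A P + P A^{\top} + Q - P C^{\top} R^{-1} C P ,
\]
% LQR Riccati equation for the dual pair (A^{\top}, C^{\top}), run backward:
\[
  -\dot S = A S + S A^{\top} + Q - S C^{\top} R^{-1} C S ,
\]
% so that P(t) = S(T - t): filtering forward in time equals control backward
% in time with the matrices transposed.  The paper shows the analogue for
% Markov jump linear systems holds when the driving chain is also reversed.
```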
This paper describes the structure of solutions to Kolmogorov's equations for nonhomogeneous jump Markov processes and applications of these results to the control of jump stochastic systems. These equations were studied by Feller (1940), who clarified in 1945, in the errata to that paper, that some of its results covered only nonexplosive Markov processes. We present the results for possibly explosive Markov processes. The paper is based on the invited talk presented by the authors at the International Conference dedicated to the 200th anniversary of the birth of P. L. Chebyshev.