
Kalman Filter Tuning with Bayesian Optimization

Published by Zhaozhong Chen
Publication date: 2019
Research language: English





Many state estimation algorithms must be tuned: given the state space process and observation models, the process and observation noise parameters must be chosen. Conventional tuning approaches rely on heuristic hand-tuning or gradient-based optimization techniques to minimize a performance cost function. However, the relationship between tuned noise values and estimator performance is highly nonlinear and stochastic. Therefore, the tuning solutions can easily get trapped in local minima, which can lead to poor choices of noise parameters and suboptimal estimator performance. This paper describes how Bayesian Optimization (BO) can overcome these issues. BO poses optimization as a Bayesian search problem for a stochastic "black box" cost function, where the goal is to search the solution space to maximize the probability of improving the current best solution. As such, BO offers a principled approach to optimization-based estimator tuning in the presence of local minima and performance stochasticity. While extended Kalman filters (EKFs) are the main focus of this work, BO can be similarly used to tune other related state space filters. The method presented here uses performance metrics derived from normalized innovation squared (NIS) filter residuals obtained via sensor data, which renders knowledge of ground-truth states unnecessary. The robustness, accuracy, and reliability of BO-based tuning is illustrated on practical nonlinear state estimation problems, including closed-loop aero-robotic control.
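As a concrete illustration of the tuning loop described above, the sketch below minimizes a NIS-consistency cost with Bayesian optimization. It is a minimal sketch, not the paper's implementation: the scalar random-walk filter, the synthetic measurement stream, and the use of scikit-optimize's gp_minimize are assumptions made for illustration only.

```python
# Hedged sketch: tuning the process/measurement noise of a scalar random-walk
# Kalman filter by minimizing a NIS-consistency cost with Bayesian optimization.
# The model, data, and cost form are illustrative, not taken from the paper.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(0)
z = np.cumsum(rng.normal(0.0, 0.1, 200)) + rng.normal(0.0, 0.5, 200)  # synthetic sensor data

def nis_cost(params):
    log_q, log_r = params
    q, r = 10.0 ** log_q, 10.0 ** log_r
    x, P, nis = 0.0, 1.0, []
    for zk in z:
        P = P + q                         # predict (random-walk state)
        S = P + r                         # innovation covariance
        nu = zk - x                       # innovation
        nis.append(nu * nu / S)           # normalized innovation squared
        K = P / S                         # Kalman gain
        x, P = x + K * nu, (1.0 - K) * P  # measurement update
    # A statistically consistent filter has mean NIS close to the measurement
    # dimension (1 here); penalize deviation of the time-averaged NIS from it.
    return float(abs(np.mean(nis) - 1.0))

space = [Real(-4, 1, name="log10_q"), Real(-4, 1, name="log10_r")]
res = gp_minimize(nis_cost, space, n_calls=40, random_state=0)  # GP surrogate + acquisition search
print("tuned q, r:", 10.0 ** res.x[0], 10.0 ** res.x[1])
```

Because the cost is evaluated on a finite, noisy data record, each query of nis_cost is itself stochastic; the Gaussian process surrogate inside gp_minimize averages over that noise rather than chasing individual samples, which is the property the abstract exploits.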


Read also

Lubin Chang, 2020
In this paper, the spacecraft attitude estimation problem has been investigated making use of the concept of matrix Lie groups. Through formulation of the attitude and gyroscope bias as elements of SE(3), the corresponding extended Kalman filter, termed SE(3)-EKF, has been derived. It is shown that the resulting SE(3)-EKF is just the newly-derived geometric extended Kalman filter (GEKF) for spacecraft attitude estimation. This provides a new perspective on the GEKF besides the common frame errors definition. Moreover, the SE(3)-EKF with reference frame attitude error is also derived and the resulting algorithm bears much resemblance to the right invariant EKF.
In this paper, we propose an approach to address the ambiguity problems in tuning the process and observation noises for a discrete-time linear Kalman filter. Conventional approaches to tuning (e.g. using normalized estimation error squared and covariance minimization) compute empirical measures of filter performance, and the parameters are selected manually or by some kind of optimization algorithm to maximize these measures of performance. However, there are two challenges with this approach. First, in theory, many of these measures do not guarantee a unique solution due to observability issues. Second, in practice, empirically computed statistical quantities can be very noisy due to a finite number of samples. We propose a method to overcome these limitations. Our method has two main parts. The first is to ensure that the tuning problem has a single unique solution. We achieve this by simultaneously tuning the filter over multiple different prediction intervals. Although this yields a unique solution, practical issues (such as sampling noise) mean that it cannot be directly applied. Therefore, we use Bayesian Optimization, a technique that handles noisy data and the local minima that such noise introduces.
Kalman filters are routinely used for many data fusion applications including navigation, tracking, and simultaneous localization and mapping problems. However, significant time and effort is frequently required to tune various Kalman filter model parameters, e.g. process noise covariance, pre-whitening filter models for non-white noise, etc. Conventional optimization techniques for tuning can get stuck in poor local minima and can be expensive to implement with real sensor data. To address these issues, a new black box Bayesian optimization strategy is developed for automatically tuning Kalman filters. In this approach, performance is characterized by one of two stochastic objective functions: normalized estimation error squared (NEES) when ground truth state models are available, or the normalized innovation error squared (NIS) when only sensor data is available. By intelligently sampling the parameter space to both learn and exploit a nonparametric Gaussian process surrogate function for the NEES/NIS costs, Bayesian optimization can efficiently identify multiple local minima and provide uncertainty quantification on its results.
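For reference, the two consistency costs named above can be written in a few lines. This is a hedged sketch of the standard NEES/NIS definitions, not code from the paper; the array shapes and names are assumptions.

```python
# Standard chi-square consistency statistics used as tuning objectives.
import numpy as np

def mean_nees(x_est, P_est, x_true):
    """Normalized estimation error squared: requires ground-truth states x_true."""
    errs = x_est - x_true                                         # (T, n) estimation errors
    return np.mean([e @ np.linalg.solve(P, e) for e, P in zip(errs, P_est)])

def mean_nis(innovations, S):
    """Normalized innovation squared: requires only measurement innovations."""
    return np.mean([nu @ np.linalg.solve(Sk, nu) for nu, Sk in zip(innovations, S)])

# Toy check: zero-mean unit-covariance errors give mean NEES near the state dimension.
T, n = 1000, 2
rng = np.random.default_rng(0)
errs = rng.normal(size=(T, n))
covs = np.broadcast_to(np.eye(n), (T, n, n))
print(mean_nees(errs, covs, np.zeros((T, n))))   # ~2.0
```

A consistent filter yields a mean NEES near the state dimension and a mean NIS near the measurement dimension, so a Bayesian optimization objective can simply penalize the deviation from those expected values.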
This paper focuses on learning a model of system dynamics online while satisfying safety constraints. Our objective is to avoid offline system identification or hand-specified models and allow a system to safely and autonomously estimate and adapt its own model during operation. Given streaming observations of the system state, we use Bayesian learning to obtain a distribution over the system dynamics. Specifically, we propose a new matrix variate Gaussian process (MVGP) regression approach with an efficient covariance factorization to learn the drift and input gain terms of a nonlinear control-affine system. The MVGP distribution is then used to optimize the system behavior and ensure safety with high probability, by specifying control Lyapunov function (CLF) and control barrier function (CBF) chance constraints. We show that a safe control policy can be synthesized for systems with arbitrary relative degree and probabilistic CLF-CBF constraints by solving a second order cone program (SOCP). Finally, we extend our design to a self-triggering formulation, adaptively determining the time at which a new control input needs to be applied in order to guarantee safety.
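The "distribution over the system dynamics" ingredient can be illustrated with ordinary Gaussian process regression. The sketch below is a toy only: it fits an independent scikit-learn GP to the drift of a one-dimensional system and omits the matrix variate factorization, the input-gain term, and the CLF-CBF SOCP controller described in the abstract.

```python
# Toy sketch: learn the drift f(x) of x_dot = f(x) + g(x) u from noisy samples
# and query a posterior mean and standard deviation for chance-constrained use.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(60, 1))            # sampled states (1-D toy system)
f_true = lambda x: -np.sin(x)                        # unknown drift to be recovered
y = f_true(X).ravel() + rng.normal(0.0, 0.05, 60)    # noisy drift observations

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

x_query = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)
mean, std = gp.predict(x_query, return_std=True)     # posterior mean and uncertainty
# A chance-constrained safety condition can back the nominal constraint off by a
# multiple of std so that it holds with high probability, in the spirit of the
# probabilistic CLF/CBF constraints above.
print(np.c_[x_query.ravel(), mean, std])
```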
We consider a discrete-time linear-quadratic Gaussian control problem in which we minimize a weighted sum of the directed information from the state of the system to the control input and the control cost. The optimal control and sensing policies can be synthesized jointly by solving a semidefinite programming problem. However, the existing solutions typically scale cubically with the horizon length. We leverage the structure in the problem to develop a distributed algorithm that decomposes the synthesis problem into a set of smaller problems, one for each time step. We prove that the algorithm runs in time linear in the horizon length. As an application of the algorithm, we consider a path-planning problem in a state space with obstacles under the presence of stochastic disturbances. The algorithm computes a locally optimal solution that jointly minimizes the perception and control cost while ensuring the safety of the path. The numerical examples show that the algorithm can scale to horizon lengths in the thousands and compute locally optimal solutions.