Optimization-based state estimation is useful for handling constrained linear or nonlinear dynamical systems. It has an ideal form, known as full information estimation (FIE), which uses all past measurements to perform state estimation, and a practical counterpart, known as moving-horizon estimation (MHE), which performs the estimation using only the most recent measurements within a window of limited length. Owing to this theoretical ideal, conditions for robust stability of FIE are easier to establish than those for MHE, and various sufficient conditions have been developed in the literature. This work reveals a generic link from the robust stability of FIE to that of MHE, showing that the former implies at least a weaker form of robust stability for MHE implemented with a sufficiently long horizon. The implication strengthens to strict robust stability of MHE if the system satisfies a mild Lipschitz continuity condition or, equivalently, a robust exponential stability condition. The revealed implications are then applied to derive new sufficient conditions for robust stability of MHE, which further reveal an intrinsic relation between the existence of a robustly stable FIE/MHE and the system being incrementally input/output-to-state stable.
Estimating and reacting to external disturbances is of fundamental importance for robust control of quadrotors. Existing estimators typically require significant tuning, or training with a large amount of data including the ground truth, to achieve satisfactory performance. This paper proposes a data-efficient differentiable moving horizon estimation (DMHE) algorithm that can automatically tune the MHE parameters online and adapt to different scenarios. We achieve this by deriving the analytical gradient of the MHE-estimated trajectory with respect to the tuning parameters, enabling end-to-end learning for auto-tuning. Most interestingly, we show that this gradient can be computed efficiently in a recursive form using a Kalman filter. Moreover, we develop a model-based policy gradient algorithm that learns the parameters directly from trajectory tracking errors, without the need for ground truth. The proposed DMHE can further be embedded as a layer with other neural networks for joint optimization. Finally, we demonstrate the effectiveness of the proposed method via both simulations and experiments on quadrotors, examining challenging scenarios such as sudden payload changes and flying in downwash.
The paper deals with state estimation of a spatially distributed system given noisy measurements from pointwise-in-time-and-space threshold sensors spread over the spatial domain of interest. A Maximum A Posteriori Probability (MAP) approach is undertaken and a Moving Horizon (MH) approximation of the MAP cost function is adopted. It is proved that, under system linearity and log-concavity of the noise probability density functions, the proposed MH-MAP state estimator amounts to the solution, at each sampling interval, of a convex optimization problem. Moreover, a suitable centralized solution for large-scale systems is proposed that substantially decreases the computational complexity. The latter algorithm is shown to be feasible for the state estimation of spatially dependent dynamic fields described by Partial Differential Equations (PDE) via the use of the Finite Element (FE) spatial discretization method. A simulation case-study concerning estimation of a diffusion field is presented in order to demonstrate the effectiveness of the proposed approach. Quite remarkably, the numerical tests exhibit a noise-assisted behavior of the proposed approach, in that the estimation accuracy turns out to be optimal in the presence of measurement noise with non-null variance.
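To make the convexity claim concrete, the following is a minimal sketch of an MH-MAP cost for a hypothetical scalar linear system observed through a single binary threshold sensor (the system, its parameters, and the helper `mh_map` are illustrative assumptions, not taken from the paper). With Gaussian measurement noise, the log-likelihood of each threshold bit is a log-concave probit term, so the negative MAP cost below is convex in the decision variables, as the abstract states.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical 1-state linear system with a threshold (binary) sensor:
#   x_{k+1} = a*x_k + w_k,   b_k = 1{x_k + v_k >= tau},   v_k ~ N(0, sigma^2).
a, tau, sigma = 0.95, 0.5, 0.3

def mh_map(bits, x_prior, q=1.0, p=1.0):
    """MAP estimate of the current state from a window of threshold bits.
    Decision vector z = [x_0, w_0, ..., w_{N-2}] (initial state + process noises)."""
    N = len(bits)

    def neg_log_post(z):
        x = z[0]
        J = p * (x - x_prior) ** 2                        # prior on the window's first state
        for k in range(N):
            s = 1.0 if bits[k] else -1.0
            J -= norm.logcdf(s * (x - tau) / sigma)       # log-concave probit likelihood
            if k < N - 1:
                J += q * z[1 + k] ** 2                    # process-noise cost
                x = a * x + z[1 + k]
        return J                                          # convex: affine states + probit terms

    res = minimize(neg_log_post, np.full(N, 0.1))
    x = res.x[0]
    for k in range(N - 1):                                # roll forward to the current state
        x = a * x + res.x[1 + k]
    return x
```

Because each window state is affine in `z` and `-logcdf` composed with an affine map is convex, any local minimizer found here is global, which is what makes the per-interval problem tractable.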
Optimization-based state estimation is useful for nonlinear or constrained dynamic systems, for which few general methods with established properties are available. The two fundamental forms are moving horizon estimation (MHE), which uses the most recent measurements within a moving time horizon, and its theoretical ideal, full information estimation (FIE), which uses all measurements up to the time of estimation. Despite extensive studies, the stability analysis of FIE and MHE for discrete-time nonlinear systems with bounded process and measurement disturbances remains an open challenge. This work aims to provide a systematic solution to this challenge. First, we prove that FIE is robustly globally asymptotically stable (RGAS) if the cost function admits a property mimicking the incremental input/output-to-state stability (i-IOSS) of the system and has sufficient sensitivity to the uncertainty in the initial state. Second, we establish an explicit link from the RGAS of FIE to that of MHE, and use it to show that MHE is RGAS under enhanced conditions if the moving horizon is long enough to suppress the propagation of uncertainties. The theoretical results imply flexible MHE designs with assured robust stability for a broad class of i-IOSS systems. Numerical experiments on linear and nonlinear systems are used to illustrate the designs and support the findings.
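As a concrete illustration of the MHE form discussed in the abstracts above, here is a minimal sketch for a hypothetical scalar nonlinear system (the system, weights, and helper `mhe` are illustrative assumptions, not any paper's actual design): the estimator minimizes a prior (arrival-cost) term plus weighted process- and measurement-noise costs over a window of the most recent measurements.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar system: x_{k+1} = 0.9*tanh(x_k) + w_k,  y_k = x_k + v_k.
def f(x):
    return 0.9 * np.tanh(x)

def mhe(ys, x_prior, horizon, q=1.0, r=1.0, p=1.0):
    """Moving horizon estimate of the current state from the last `horizon` measurements.
    Decision vector z = [x_0, w_0, ..., w_{N-2}] (window's initial state + process noises)."""
    ys = ys[-horizon:]
    N = len(ys)

    def cost(z):
        x = z[0]
        J = p * (x - x_prior) ** 2          # prior / arrival-cost term
        for k in range(N):
            J += r * (ys[k] - x) ** 2       # measurement-noise cost
            if k < N - 1:
                J += q * z[1 + k] ** 2      # process-noise cost
                x = f(x) + z[1 + k]
        return J

    z0 = np.zeros(N)
    z0[0] = x_prior
    res = minimize(cost, z0)
    x = res.x[0]
    for k in range(N - 1):                  # roll the optimal window forward
        x = f(x) + res.x[1 + k]
    return x
```

FIE corresponds to the same program with `horizon` equal to the full data length; the robustness results above concern how the estimation error behaves as disturbances enter this cost.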
This article presents an up-to-date tutorial review of nonlinear Bayesian estimation. State estimation for nonlinear systems has been a challenge encountered in a wide range of engineering fields, attracting decades of research effort. To date, one of the most promising and popular approaches is to view and address the problem from a Bayesian probabilistic perspective, which enables estimation of the unknown state variables by tracking their probability distribution or statistics (e.g., mean and covariance) conditioned on the system's measurement data. This article offers a systematic introduction to the Bayesian state estimation framework and reviews various Kalman filtering (KF) techniques, progressing from the standard KF for linear systems to the extended KF, unscented KF and ensemble KF for nonlinear systems. It also overviews other prominent or emerging Bayesian estimation methods, including Gaussian filtering, Gaussian-sum filtering, particle filtering and moving horizon estimation, and extends the discussion of state estimation to more complicated problems such as simultaneous state and parameter/input estimation.
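The standard KF that the review takes as its starting point can be sketched in a few lines: one predict/update cycle for a linear-Gaussian model (the matrices below are generic placeholders, not from the article).

```python
import numpy as np

def kf_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the standard Kalman filter for
    x_{k+1} = A x_k + w_k (w ~ N(0, Q)),  y_k = C x_k + v_k (v ~ N(0, R))."""
    # predict: propagate the mean and covariance through the dynamics
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # update: correct with the new measurement
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

The nonlinear variants surveyed in the article replace the predict/update maps: the extended KF linearizes `A` and `C` along the current estimate, while the unscented and ensemble KFs propagate sigma points or ensemble members instead of the covariance directly.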
We point out a limitation of mutual information neural estimation (MINE): the network fails to learn during the initial training phase, leading to slow convergence in the number of training iterations. To solve this problem, we propose a faster method called mutual information neural entropic estimation (MI-NEE). Our solution first generalizes MINE to estimate the entropy using a custom reference distribution. The entropy estimate can then be used to estimate the mutual information. We argue that the seemingly redundant intermediate step of entropy estimation allows one to improve convergence via an appropriate reference distribution. In particular, we show that MI-NEE reduces to MINE in the special case where the reference distribution is the product of marginal distributions, but faster convergence is possible by choosing the uniform distribution as the reference instead. Compared to the product of marginals, the uniform distribution introduces more samples in low-density regions and fewer samples in high-density regions, which appear to lead to an overall larger gradient for faster convergence.
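The objective that MINE maximizes, and that MI-NEE generalizes with other reference distributions, is the Donsker-Varadhan lower bound on mutual information. The sketch below illustrates that bound on correlated Gaussians with a hand-picked critic (the closed-form log density ratio) in place of the neural network a real MINE would train; the example and its parameters are illustrative assumptions, not from the paper.

```python
import numpy as np

def dv_estimate(T, joint, reference):
    """Donsker-Varadhan lower bound used by MINE:
    I(X;Y) >= E_joint[T(x,y)] - log E_reference[exp(T(x,y))].
    MINE takes the product of marginals as the reference; MI-NEE allows others."""
    return T(*joint).mean() - np.log(np.exp(T(*reference)).mean())

rho = 0.8  # correlation of a standard bivariate Gaussian pair (X, Y)

def T_star(x, y):
    # Optimal critic = log density ratio p(x,y)/(p(x)p(y)), up to a constant
    # (the DV bound is invariant to adding a constant to T).
    return -(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2)) + (x**2 + y**2) / 2

rng = np.random.default_rng(0)
n = 100_000
cov = np.array([[1.0, rho], [rho, 1.0]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # samples from the joint
x_ref = rng.standard_normal(n)                          # product-of-marginals
y_ref = rng.standard_normal(n)                          # reference (MINE's choice)

mi_hat = dv_estimate(T_star, (xy[:, 0], xy[:, 1]), (x_ref, y_ref))
mi_true = -0.5 * np.log(1 - rho**2)
```

With the optimal critic the bound is tight, so `mi_hat` should approach `mi_true`; MINE's difficulty is that the trained critic is far from optimal early on, which is where MI-NEE's choice of reference distribution helps.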