
Robust Stability of Optimization-based State Estimation

Added by: Wuhua Hu
Publication date: 2017
Language: English
Authors: Wuhua Hu





Optimization-based state estimation is useful for nonlinear or constrained dynamic systems, for which few general methods with established properties are available. The two fundamental forms are moving horizon estimation (MHE), which uses the most recent measurements within a moving time horizon, and its theoretical ideal, full information estimation (FIE), which uses all measurements up to the time of estimation. Despite extensive studies, the stability analyses of FIE and MHE for discrete-time nonlinear systems with bounded process and measurement disturbances remain an open challenge. This work aims to provide a systematic solution to this challenge. First, we prove that FIE is robustly globally asymptotically stable (RGAS) if the cost function admits a property mimicking the incremental input/output-to-state stability (i-IOSS) of the system and has sufficient sensitivity to the uncertainty in the initial state. Second, we establish an explicit link from the RGAS of FIE to that of MHE, and use it to show that MHE is RGAS under enhanced conditions if the moving horizon is long enough to suppress the propagation of uncertainties. The theoretical results imply flexible MHE designs with assured robust stability for a broad class of i-IOSS systems. Numerical experiments on linear and nonlinear systems are used to illustrate the designs and support the findings.
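For a concrete picture of the estimators discussed above, the following is a minimal MHE sketch in Python for a generic discrete-time nonlinear model x_{k+1} = f(x_k) + w_k, y_k = h(x_k) + v_k. The quadratic (least-squares) cost, the example maps f and h, and the weights are illustrative assumptions, not the cost functions analyzed in the paper. FIE corresponds to letting the horizon grow so the window covers all past measurements, while MHE fixes the horizon length and slides the window forward.

```python
# Minimal moving horizon estimation (MHE) sketch for a discrete-time
# nonlinear system x_{k+1} = f(x_k) + w_k, y_k = h(x_k) + v_k.
# The least-squares cost, example dynamics f, measurement map h, and
# weights are illustrative assumptions only.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # example nonlinear dynamics (assumed for illustration)
    return np.array([0.99 * x[0] + 0.2 * x[1],
                     -0.1 * x[0] + 0.5 * x[1] / (1.0 + x[1] ** 2)])

def h(x):
    # example measurement map (assumed for illustration)
    return np.array([x[0] - x[1]])

def mhe_step(y_window, x_prior, N, w_prior=1.0, w_proc=1.0, w_meas=10.0):
    """Estimate the state trajectory over the last N steps from N+1
    measurements y_window and a prior guess x_prior for the state at
    the start of the window; return the current state estimate."""
    nx = x_prior.size

    def cost(z):
        xs = z.reshape(N + 1, nx)
        J = w_prior * np.sum((xs[0] - x_prior) ** 2)           # arrival/prior cost
        for k in range(N):
            J += w_proc * np.sum((xs[k + 1] - f(xs[k])) ** 2)   # process residuals
        for k in range(N + 1):
            J += w_meas * np.sum((y_window[k] - h(xs[k])) ** 2) # measurement residuals
        return J

    z0 = np.tile(x_prior, N + 1)            # warm start from the prior
    sol = minimize(cost, z0, method="BFGS")
    return sol.x.reshape(N + 1, nx)[-1]

# example usage: estimate from N+1 noisy measurements of a simulated trajectory
N = 10
rng = np.random.default_rng(0)
x, ys = np.array([1.0, 0.0]), []
for _ in range(N + 1):
    ys.append(h(x) + 0.05 * rng.standard_normal(1))
    x = f(x) + 0.01 * rng.standard_normal(2)
print(mhe_step(np.array(ys), np.array([0.5, 0.5]), N))
```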



Related research

In this paper, we study the norm-based robust (efficient) solutions of a Vector Optimization Problem (VOP). We define two kinds of non-ascent directions in terms of Clarke's generalized gradient and characterize norm-based robustness by means of the newly defined directions. This is done under a basic Constraint Qualification (CQ). We extend the provided characterization to VOPs with conic constraints. Moreover, we derive a necessary condition for norm-based robustness utilizing a nonsmooth gap function.
Legged robots, specifically quadrupeds, are becoming increasingly attractive for industrial applications such as inspection. However, to leave the laboratory and become useful to an end user, they require reliability in harsh conditions. From the perspective of state estimation, it is essential to accurately estimate the robot's state despite challenges such as uneven or slippery terrain, textureless and reflective scenes, and dynamic camera occlusions. We are motivated to reduce the dependency on foot contact classifications, which fail when slipping, and to reduce position drift during dynamic motions such as trotting. To this end, we present a factor graph optimization method for state estimation which tightly fuses and smooths inertial navigation, leg odometry and visual odometry. The effectiveness of the approach is demonstrated using the ANYmal quadruped robot navigating in a realistic outdoor industrial environment. This experiment included trotting, walking, crossing obstacles and ascending a staircase. The proposed approach decreased the relative position error by up to 55% and the absolute position error by 76% compared to kinematic-inertial odometry.
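As a rough illustration of the smoothing idea only (not the paper's tightly coupled inertial/leg/visual factor graph), the sketch below jointly estimates a window of 2D poses from relative odometry factors and a few absolute position fixes; the factor types, noise levels, and helper names are assumed for the example.

```python
# Toy factor-graph-style smoother: poses in a window are estimated jointly
# from relative odometry factors (binary) and absolute position fixes (unary).
# Factor types and noise values are assumed for illustration.
import numpy as np
from scipy.optimize import least_squares

def smooth_window(odom, fixes, sigma_odom=0.05, sigma_fix=0.5):
    """odom:  list of relative 2D displacements between consecutive poses.
       fixes: dict {pose_index: absolute 2D position measurement}."""
    n = len(odom) + 1                        # number of poses in the window

    def residuals(z):
        poses = z.reshape(n, 2)
        res = []
        for k, d in enumerate(odom):         # between factors from odometry
            res.append((poses[k + 1] - poses[k] - d) / sigma_odom)
        for k, p in fixes.items():           # unary factors from position fixes
            res.append((poses[k] - p) / sigma_fix)
        return np.concatenate(res)

    # initialize by dead-reckoning the odometry
    init = np.vstack([np.zeros(2), np.cumsum(odom, axis=0)])
    sol = least_squares(residuals, init.ravel())
    return sol.x.reshape(n, 2)

# example: drifting odometry corrected by two position fixes
odom = [np.array([1.0, 0.1])] * 10
fixes = {0: np.array([0.0, 0.0]), 10: np.array([10.0, 0.0])}
print(smooth_window(odom, fixes)[-1])
```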
Xiaojun Zhou (2021)
In this paper, a novel multiagent-based state transition optimization algorithm with a linear convergence rate, named MASTA, is constructed. It first generates an initial population randomly and uniformly. Then, it applies the basic state transition algorithm (STA) to the population to generate a new population. After that, it computes the fitness values of all individuals and finds the best individuals in the new population. Moreover, it performs an effective communication operation and updates the population. Iterating this process yields the best solution found. In experiments on common benchmark functions and comparisons with several state-of-the-art optimization algorithms, the proposed MASTA algorithm shows superior or comparable performance.
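The iterative scheme described above can be sketched as a generic multiagent population loop; the perturbation and communication rules below are simplified placeholders rather than the state transition operators actually used by STA/MASTA.

```python
# Simplified multiagent population loop in the spirit described above:
# each agent proposes a candidate via a perturbation, keeps it if the fitness
# improves, and a communication step nudges agents toward the current best.
# These rules are placeholders, not the STA operators used by MASTA.
import numpy as np

def multiagent_search(fitness, dim, n_agents=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(n_agents, dim))        # random uniform init
    fit = np.apply_along_axis(fitness, 1, pop)
    for _ in range(iters):
        # transition step: random perturbation of each agent, greedy acceptance
        cand = pop + 0.1 * rng.standard_normal(pop.shape)
        cand_fit = np.apply_along_axis(fitness, 1, cand)
        improve = cand_fit < fit
        pop[improve], fit[improve] = cand[improve], cand_fit[improve]
        # communication step: move agents part of the way toward the best one
        best = pop[np.argmin(fit)].copy()
        pop += 0.5 * rng.random((n_agents, 1)) * (best - pop)
        fit = np.apply_along_axis(fitness, 1, pop)
    i = np.argmin(fit)
    return pop[i], fit[i]

# example: minimize the sphere function
print(multiagent_search(lambda x: float(np.sum(x ** 2)), dim=5))
```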
This paper presents a novel scalable framework to solve the optimization of a nonlinear system with differential algebraic equation (DAE) constraints that enforce the asymptotic stability of the underlying dynamic model with respect to certain disturbances. Existing solution approaches to analogous DAE-constrained problems are based on discretization of the DAE system into a large set of nonlinear algebraic equations representing time-marching schemes. These approaches are not scalable to large models. The proposed framework, based on LaSalle's invariance principle, uses convex Lyapunov functions to develop a novel stability certificate which consists of a limited number of algebraic constraints. We develop specific algorithms for two major types of nonlinearities, namely Lur'e and quasi-polynomial systems. Quadratic and convex sum-of-squares Lyapunov functions are constructed for the Lur'e-type and quasi-polynomial systems, respectively. A numerical experiment is performed on a 3-generator power network to obtain a solution for transient-stability-constrained optimal power flow.
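The basic convex building block behind such stability certificates is the search for a quadratic Lyapunov function via a linear matrix inequality. The sketch below (using cvxpy, with an example matrix A chosen for illustration) shows this for a plain linear system; the paper's certificates for Lur'e and quasi-polynomial systems extend this idea.

```python
# Convex search for a quadratic Lyapunov function V(x) = x' P x certifying
# asymptotic stability of dx/dt = A x: find P > 0 with A'P + PA < 0.
# The matrix A is only an example; requires an SDP-capable solver (e.g. SCS).
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov decrease condition
prob = cp.Problem(cp.Minimize(0), constraints)        # feasibility problem
prob.solve()
print(prob.status)
print(P.value)
```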
We propose kernel distributionally robust optimization (Kernel DRO) using insights from robust optimization theory and functional analysis. Our method uses reproducing kernel Hilbert spaces (RKHS) to construct a wide range of convex ambiguity sets, which can be generalized to sets based on integral probability metrics and finite-order moment bounds. This perspective unifies multiple existing robust and stochastic optimization methods. We prove a theorem that generalizes the classical duality in the mathematical problem of moments. Enabled by this theorem, we reformulate the maximization with respect to measures in DRO into a dual program that searches for RKHS functions. Using universal RKHSs, the theorem applies to a broad class of loss functions, lifting common restrictions such as requiring polynomial losses or knowledge of the Lipschitz constant. We then establish a connection between DRO and stochastic optimization with expectation constraints. Finally, we propose practical algorithms based on both batch convex solvers and stochastic functional gradients, which apply to general optimization and machine learning tasks.