In this paper, we present an approach for designing feedback controllers for polynomial systems that maximize the size of the time-limited backwards reachable set (BRS). We rely on the notion of occupation measures to pose the synthesis problem as an infinite-dimensional linear program (LP) and provide finite-dimensional approximations of this LP in terms of semidefinite programs (SDPs). The solution to each SDP yields a polynomial control policy and an outer approximation of the largest achievable BRS. In contrast to traditional Lyapunov-based approaches, which are non-convex and require feasible initialization, our approach is convex and requires no initialization. The resulting time-varying controllers and approximated reachable sets are well suited for use in a trajectory library or feedback motion-planning algorithm. We demonstrate the efficacy and scalability of our approach on five nonlinear systems.
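As a rough schematic of this formulation (notation assumed here rather than taken from the abstract, and suppressing the control occupation measures for brevity), the synthesis problem takes the form of a linear program over nonnegative measures:

\begin{align*}
  \sup_{\mu_0,\,\mu,\,\mu_T \,\ge\, 0} \quad & \mu_0(X) \\
  \text{s.t.} \quad & \partial_t \mu + \nabla_x \cdot (f\,\mu) = \delta_0 \otimes \mu_0 - \delta_T \otimes \mu_T
    \quad \text{(Liouville equation, in the weak sense)} \\
  & \mu_0 \le \lambda \quad \text{($\lambda$ the Lebesgue measure on the state space $X$)},
\end{align*}

where \mu_0, \mu and \mu_T denote initial, occupation and terminal measures. Truncating the associated moment sequences yields the SDP hierarchy, whose dual problems return polynomial certificates whose superlevel sets outer-approximate the BRS, along with the polynomial control policy.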
We address the problem of designing optimal linear time-invariant (LTI) sparse controllers for LTI systems, which corresponds to minimizing a norm of the closed-loop system subject to sparsity constraints on the controller structure. This problem is NP-hard in general, which motivates the development of tractable approximations. We characterize a class of convex restrictions based on a new notion of Sparsity Invariance (SI). The underlying idea of SI is to design sparsity patterns for the transfer matrices Y(s) and X(s) such that any corresponding controller K(s) = Y(s)X(s)^-1 exhibits the desired sparsity pattern. For sparsity constraints, the SI approach goes beyond the notion of Quadratic Invariance (QI): 1) the SI approach always yields a convex restriction; 2) the solution obtained via SI is guaranteed to be globally optimal when QI holds, and performs at least as well as restricting to a nearest QI subset. Moreover, the notion of SI naturally applies to designing structured static controllers, where QI is not applicable. Numerical examples show that, even in non-QI cases, SI can recover solutions that 1) are globally optimal and 2) strictly outperform previous methods.
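As a toy illustration of the SI idea (a hypothetical example, not the construction from the paper): if Y and X are both constrained to be lower triangular and X is invertible, then X^-1, and hence K = Y X^-1, is lower triangular, so a lower-triangular sparsity constraint on K translates into convex (linear) constraints on (Y, X). The small numerical check below uses this pattern.

# Toy numerical check of the Sparsity Invariance idea (illustrative only):
# if Y and X are lower triangular and X is invertible, K = Y X^{-1} is lower
# triangular, so a sparsity constraint on K can be imposed convexly on (Y, X).
import numpy as np

rng = np.random.default_rng(0)
n = 5
Y = np.tril(rng.standard_normal((n, n)))                  # structured "numerator"
X = np.tril(rng.standard_normal((n, n))) + 5 * np.eye(n)  # structured, well-conditioned
K = Y @ np.linalg.inv(X)

print(np.max(np.abs(np.triu(K, k=1))))  # ~1e-16: K inherits the sparsity pattern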
This paper presents a provably correct method for robot navigation in 2D environments cluttered with familiar but unexpected non-convex, star-shaped obstacles as well as completely unknown convex obstacles. We presuppose a limited-range onboard sensor capable of recognizing and localizing the familiar, non-convex shapes from its catalogue and, leveraging ideas from constructive solid geometry, generating an implicit representation of each one online. These representations underlie an online change of coordinates to a completely convex model planning space, in which a previously developed online construction yields a provably correct reactive controller that is pulled back to the physically sensed representation to generate the actual robot commands. We extend the construction to differential-drive robots, establish the correctness of the proposed control architecture with formal proofs, and illustrate its empirical utility with numerical simulations.
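Schematically (with notation assumed here), if h denotes the online change of coordinates from the physically sensed space to the convex model space and v the reactive controller defined there, the fully actuated robot command is obtained by pulling v back through h:

u(x) \;=\; \big[D_x h(x)\big]^{-1}\, v\big(h(x)\big),

with an additional transformation handling the nonholonomic constraint in the differential-drive extension.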
We propose a convex optimization procedure for black-box identification of nonlinear state-space models for systems that exhibit stable limit cycles (unforced periodic solutions). It extends the robust identification error framework, in which a convex upper bound on simulation error is optimized to fit rational polynomial models with a strong stability guarantee. In this work, we relax the stability constraint using the concepts of transverse dynamics and orbital stability, thus allowing systems with autonomous oscillations to be identified. The resulting optimization problem is convex and can be formulated as a semidefinite program. A simulation-error bound is proved without assuming that the true system is in the model class or that the number of measurements goes to infinity. Conditions that guarantee the existence of a unique limit cycle of the model are proved and related to the model class we search over. The method is illustrated by identifying a high-fidelity model from experimental recordings of a live rat hippocampal neuron in culture.
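At a high level (notation assumed, and omitting the details of the transverse-dynamics construction), the fit is a problem of the form

\min_{\theta}\ \hat{E}(\theta) \quad \text{s.t.} \quad M(\theta) \succeq 0,

where \theta collects the coefficients of the rational polynomial model, \hat{E}(\theta) is a convex upper bound on the simulation error, and the linear matrix inequality M(\theta) \succeq 0 certifies orbital stability of the model's limit cycle via its transverse dynamics; fixing the polynomial degrees makes this a semidefinite program.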
This paper introduces new techniques for using convex optimization to fit a class of stable nonlinear dynamical models to input-output data. We present an algorithm that guarantees consistent estimates of models in this class when a small set of repeated experiments with suitably independent measurement noise is available. Stability of the estimated models is guaranteed without any assumptions on the input-output data. We first present a convex optimization scheme for identifying stable state-space models from empirical moments. Next, we provide a method for using repeated experiments to remove the effect of noise on these moment and model estimates. The technique is demonstrated on a simple simulated example.
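The following toy sketch illustrates one way a cross-experiment trick of this flavor removes noise bias from second-moment estimates (an illustration of the general idea, not the paper's exact estimator): cross-correlating two repetitions of the same experiment with independent measurement noise leaves only the noise-free moments.

# Toy illustration (not the paper's exact estimator): cross-moments of two repeated
# experiments with independent noise remove the noise bias that corrupts the
# single-experiment second-moment estimate.
import numpy as np

rng = np.random.default_rng(1)
T = 100_000
s = rng.standard_normal(T)               # underlying noise-free signal (same in both runs)
y1 = s + 0.5 * rng.standard_normal(T)    # experiment 1: independent measurement noise
y2 = s + 0.5 * rng.standard_normal(T)    # experiment 2: independent measurement noise

print(np.mean(y1 * y1))   # ~ var(s) + var(noise)  -> biased
print(np.mean(y1 * y2))   # ~ var(s)               -> noise bias removed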
In this paper, we propose a new approach to designing globally convergent reduced-order observers for nonlinear control systems via contraction analysis and convex optimization. Although contraction is a concept naturally suited to state estimation, existing solutions are either local or relatively conservative when applied to physical systems. To address this, we show that the problem can be translated into an off-line search for a coordinate transformation after which the dynamics are (transversely) contracting. The resulting sufficient condition consists of easily verifiable differential inequalities which, on the one hand, identify a very general class of detectable nonlinear systems and, on the other hand, can be expressed as computationally efficient convex optimization problems, making the design procedure more systematic. Connections with several well-established approaches and concepts are also clarified. Finally, we illustrate the proposed method on several numerical and physical examples, including polynomial, mechanical, electromechanical and biochemical systems.
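As a generic example of the kind of certificate involved (not the paper's exact inequality), a metric P(x) \succ 0 satisfying a differential inequality of the form

\dot{P}(x) + \frac{\partial f}{\partial x}(x)^{\top} P(x) + P(x)\,\frac{\partial f}{\partial x}(x) \;\preceq\; -2\lambda\, P(x)

certifies exponential contraction at rate \lambda; the paper's condition is a transverse/partial analogue posed after the coordinate transformation, and for polynomial data inequalities of this kind can be checked with sum-of-squares (convex) programming.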