
Reduced-Order Nonlinear Observers via Contraction Analysis and Convex Optimization

Posted by Bowen Yi
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





In this paper, we propose a new approach to design globally convergent reduced-order observers for nonlinear control systems via contraction analysis and convex optimization. Although contraction is a concept naturally suited to state estimation, the existing solutions are either local or relatively conservative when applied to physical systems. To address this, we show that the problem can be translated into an off-line search for a coordinate transformation after which the dynamics is (transversely) contracting. The resulting sufficient condition consists of easily verifiable differential inequalities which, on one hand, identify a very general class of detectable nonlinear systems and, on the other hand, can be expressed as a computationally efficient convex optimization problem, making the design procedure more systematic. Connections with several well-established approaches and concepts are also clarified in the paper. Finally, we illustrate the proposed method with several numerical and physical examples, including polynomial, mechanical, electromechanical and biochemical systems.
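As a rough illustration of the convex-optimization side of such a design, the sketch below searches for a constant metric P satisfying a contraction-type matrix inequality for a toy two-state system, posed as an LMI feasibility problem in CVXPY. The system, the state grid, and the decay rate are illustrative assumptions, not the paper's actual construction.

```python
# A minimal sketch: check a contraction-type condition J(x)^T P + P J(x) <= -2*lam*P
# on a grid of states by solving an LMI feasibility problem (illustrative only).
import numpy as np
import cvxpy as cp

def jacobian(x):
    # df/dx for the toy dynamics f(x) = [-x1 + x2**3, -x2 - x1]
    return np.array([[-1.0, 3.0 * x[1] ** 2],
                     [-1.0, -1.0]])

lam = 0.2                           # desired contraction (decay) rate
P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> np.eye(2)]      # P positive definite (normalized)

# Impose the inequality on sampled states (the Jacobian depends only on x2 here).
for x2 in np.linspace(-1.0, 1.0, 11):
    J = jacobian([0.0, x2])
    constraints.append(J.T @ P + P @ J << -2 * lam * P)

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)
print(P.value)
```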




Read also

This paper introduces new techniques for using convex optimization to fit input-output data to a class of stable nonlinear dynamical models. We present an algorithm that guarantees consistent estimates of models in this class when a small set of repeated experiments with suitably independent measurement noise is available. Stability of the estimated models is guaranteed without any assumptions on the input-output data. We first present a convex optimization scheme for identifying stable state-space models from empirical moments. Next, we provide a method for using repeated experiments to remove the effect of noise on these moment and model estimates. The technique is demonstrated on a simple simulated example.
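A minimal sketch of the repeated-experiment idea, under simplifying assumptions (a known stable linear system, noisy full-state measurements, and a plain least-squares fit standing in for the paper's moment-based convex scheme): averaging the measurements of R independent repetitions before fitting attenuates the noise that would otherwise bias the estimate.

```python
# Illustrative only: average repeated experiments to suppress measurement noise,
# then fit [A B] by least squares on the averaged trajectory.
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.7]])   # stable (spectral radius < 1)
B_true = np.array([[0.0], [1.0]])

T, R = 200, 50                                 # horizon and number of repetitions
u = rng.standard_normal((T, 1))                # same input for every repetition

def run_experiment():
    x = np.zeros(2)
    ys = []
    for t in range(T):
        ys.append(x + 0.3 * rng.standard_normal(2))   # noisy state measurement
        x = A_true @ x + B_true @ u[t]
    return np.array(ys)

y_bar = np.mean([run_experiment() for _ in range(R)], axis=0)  # noise averages out

Phi = np.hstack([y_bar[:-1], u[:-1]])          # regressors [y_t, u_t]
Theta, *_ = np.linalg.lstsq(Phi, y_bar[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]
print("estimated A:\n", A_hat)
```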
We propose a convex optimization procedure for black-box identification of nonlinear state-space models for systems that exhibit stable limit cycles (unforced periodic solutions). It extends the robust identification error framework in which a convex upper bound on simulation error is optimized to fit rational polynomial models with a strong stability guarantee. In this work, we relax the stability constraint using the concepts of transverse dynamics and orbital stability, thus allowing systems with autonomous oscillations to be identified. The resulting optimization problem is convex, and can be formulated as a semidefinite program. A simulation-error bound is proved without assuming that the true system is in the model class, or that the number of measurements goes to infinity. Conditions which guarantee existence of a unique limit cycle of the model are proved and related to the model class that we search over. The method is illustrated by identifying a high-fidelity model from experimental recordings of a live rat hippocampal neuron in culture.
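For context, the snippet below simulates a Van der Pol oscillator, a standard example of a system with a stable unforced limit cycle, to produce the kind of autonomous-oscillation data such an identification method targets; the oscillator and its parameters are illustrative stand-ins for the neuron recordings used in the paper.

```python
# Illustrative data generation: an autonomous system that settles onto a limit cycle.
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, x, mu=1.0):
    return [x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]]

t_eval = np.linspace(0.0, 30.0, 3000)
sol = solve_ivp(van_der_pol, (0.0, 30.0), [0.1, 0.0], t_eval=t_eval)

# After the transient, the trajectory lies on the limit cycle; these samples
# could serve as training data for a convex identification scheme of this kind.
y = sol.y[0][1500:]
print("approximate oscillation amplitude:", y.max())
```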
We propose a framework to use Nesterov's accelerated method for constrained convex optimization problems. Our approach consists of first reformulating the original problem as an unconstrained optimization problem using a continuously differentiable exact penalty function. This reformulation is based on replacing the Lagrange multipliers in the augmented Lagrangian of the original problem by Lagrange multiplier functions. The expressions of these Lagrange multiplier functions, which depend upon the gradients of the objective function and the constraints, can make the unconstrained penalty function non-convex in general even if the original problem is convex. We establish sufficient conditions on the objective function and the constraints of the original problem under which the unconstrained penalty function is convex. This enables us to use Nesterov's accelerated gradient method for unconstrained convex optimization and achieve a guaranteed rate of convergence which is better than the state-of-the-art first-order algorithms for constrained convex optimization. Simulations illustrate our results.
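The accelerated method referred to above is standard; as a minimal sketch, the code below runs Nesterov's accelerated gradient iteration on an unconstrained smooth convex problem (a least-squares objective standing in for the exact-penalty reformulation, which is not reproduced here).

```python
# Nesterov's accelerated gradient on f(x) = 0.5*||Ax - b||^2 (illustrative objective).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

def grad(x):
    return A.T @ (A @ x - b)

L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
x = y = np.zeros(10)
t = 1.0
for _ in range(200):
    x_next = y - grad(y) / L       # gradient step from the extrapolated point
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
    y = x_next + (t - 1.0) / t_next * (x_next - x)   # momentum extrapolation
    x, t = x_next, t_next

print("residual:", np.linalg.norm(A @ x - b))
```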
Atangana and Baleanu proposed a new fractional derivative with a non-local and non-singular Mittag-Leffler kernel to solve problems posed by researchers in the field of fractional calculus. This new derivative is better suited to describing essential aspects of non-local dynamical systems. We present some results regarding Lyapunov stability theory, particularly the Lyapunov Direct Method for fractional-order systems modeled with Atangana-Baleanu derivatives, together with some significant inequalities that support the theoretical analysis. As applications in control theory, algorithms for state estimation are proposed for linear and nonlinear fractional-order systems.
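The non-singular kernel in question is the Mittag-Leffler function; a minimal sketch of evaluating the one-parameter function E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1) by a truncated series follows (the truncation length is an illustrative choice, and the fractional-order estimation algorithms themselves are not reproduced).

```python
# Truncated-series evaluation of the one-parameter Mittag-Leffler function.
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha, n_terms=100):
    k = np.arange(n_terms)
    return np.sum(z ** k / gamma(alpha * k + 1.0))

# For alpha = 1 the function reduces to exp(z), a quick sanity check.
print(mittag_leffler(-1.0, 1.0), np.exp(-1.0))
print(mittag_leffler(-1.0, 0.8))
```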
The basic reproduction number $R_0$ is a fundamental quantity in epidemiological modeling, reflecting the typical number of secondary infections that arise from a single infected individual. While $R_0$ is widely known to scientists, policymakers, and the general public, it has received comparatively little attention in the controls community. This note provides two novel characterizations of $R_0$: a stability characterization and a geometric program characterization. The geometric program characterization allows us to write $R_0$-constrained and budget-constrained optimal resource allocation problems as geometric programs, which are easily transformed into convex optimization problems. We apply these programs to a case study of allocating vaccines and antidotes, finding that targeting $R_0$ instead of the spectral abscissa of the Jacobian matrix (a common target in the controls literature) leads to qualitatively different solutions.
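As a toy illustration of the geometric-program characterization, the sketch below caps $R_0 = \beta/\gamma$ for a simple SIR-type model while minimizing an assumed intervention cost, solved with CVXPY's geometric-programming mode; the cost terms and rate values are hypothetical.

```python
# Illustrative R0-constrained allocation posed as a geometric program.
import cvxpy as cp

beta = cp.Variable(pos=True)    # transmission rate after intervention
gamma = cp.Variable(pos=True)   # recovery rate after intervention
beta0, gamma0 = 0.5, 0.1        # nominal (uncontrolled) rates

# Spending more reduces beta below beta0 and raises gamma above gamma0.
cost = 2.0 * beta0 / beta + 1.0 * gamma / gamma0

constraints = [
    beta / gamma <= 0.9,        # cap R0 = beta/gamma at 0.9
    beta <= beta0,              # intervention can only reduce transmission
    gamma >= gamma0,            # intervention can only speed up recovery
]
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve(gp=True)
print(prob.status, beta.value, gamma.value, "R0 =", beta.value / gamma.value)
```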