
Physics-Based Causal Lifting Linearization of Nonlinear Control Systems Underpinned by the Koopman Operator

Added by Nicholas Selby
Publication date: 2021
Language: English





Methods for constructing causal linear models from nonlinear dynamical systems through lifting linearization, underpinned by the Koopman operator and physical system modeling theory, are presented. Outputs of a nonlinear control system, called observables, may be functions of both state and input, $\phi(x,u)$. Such input-dependent observables cannot be used for lifting the system, because the state equations in the augmented space then contain the time derivative of the input and are therefore anticausal. Here, the mechanism that creates anticausal observables is examined, and two methods for solving the causality problem in lifting linearization are presented. The first method replaces each anticausal observable by its integral variable $\phi^*$ and lifts the dynamics with $\phi^*$, so that the time derivative of $\phi^*$ does not include the time derivative of the input. The second method alters the original physical model by adding a small inertial element or a small capacitive element, so that the system's causal relationships change. The augmented dynamics alter the signal path from the input to the anticausal observable so that the observables no longer depend on the inputs. Numerical simulations validate the effectiveness of the methods.
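As a brief illustration of the anticausality mechanism described above (the drift notation $\dot{x} = f(x,u)$ is assumed here for exposition and is not quoted from the paper): differentiating an input-dependent observable along trajectories gives

$$\dot{\phi}(x,u) = \frac{\partial \phi}{\partial x} f(x,u) + \frac{\partial \phi}{\partial u} \dot{u},$$

so the lifted state equation contains the time derivative of the input and is anticausal. Defining instead the integral observable $\phi^*(t) = \int_0^t \phi(x(\tau),u(\tau))\,d\tau$ yields

$$\dot{\phi}^* = \phi(x,u),$$

which contains no $\dot{u}$ term, so lifting with $\phi^*$ preserves causality.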



Related research

The Koopman operator allows for handling nonlinear systems through a (globally) linear representation. In general, the operator is infinite-dimensional, necessitating finite approximations, for which there is no overarching framework. Although there are principled ways of learning such finite approximations, they are in many instances overlooked in favor of often ill-posed and unstructured methods. Also, Koopman operator theory has long-standing connections to known system-theoretic and dynamical-systems notions that are not universally recognized. Given these realities, this work aims to bridge the gap between the various concepts regarding both theory and tractable realizations. Firstly, we review data-driven representations (both unstructured and structured) of Koopman operator dynamical models, categorizing various existing methodologies and highlighting their differences. Furthermore, we provide concise insight into the paradigm's relation to system-theoretic notions and analyze the prospect of using the paradigm to model control systems. Additionally, we outline the current challenges and comment on future perspectives.
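A minimal sketch of one standard finite approximation of the Koopman operator, Extended Dynamic Mode Decomposition (EDMD); the dictionary of observables, the toy data, and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lift(X):
    """Illustrative dictionary: the state plus a few monomial observables."""
    x1, x2 = X
    return np.array([x1, x2, x1**2, x1 * x2, x2**2, np.ones_like(x1)])

def edmd(X, Y):
    """Least-squares Koopman matrix K mapping lifted x_k to lifted x_{k+1}.

    X, Y are 2 x m arrays of snapshot pairs (x_k, x_{k+1}).
    """
    PhiX, PhiY = lift(X), lift(Y)
    # Solve PhiY ≈ K PhiX in the least-squares sense.
    return PhiY @ np.linalg.pinv(PhiX)

# Toy data from the discrete-time nonlinear map x+ = [0.9*x1, 0.8*x2 + 0.1*x1**2].
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2, 500))
Y = np.vstack([0.9 * X[0], 0.8 * X[1] + 0.1 * X[0]**2])
K = edmd(X, Y)
print(K.shape)  # (6, 6): a linear predictor acting on the lifted state
```

Structured approaches constrain the dictionary or the matrix K; unstructured ones, as the abstract notes, often fit K without such considerations.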
In this paper we prove new connections between two frameworks for the analysis and control of nonlinear systems: the Koopman operator framework and contraction analysis. Each method, in different ways, provides exact and global analyses of nonlinear systems by way of linear systems theory. The main results of this paper show equivalence between the contraction and Koopman approaches for a wide class of stability analysis and control design problems. In particular, stability or stabilizability in the Koopman framework implies the existence of a contraction metric (resp. control contraction metric) for the nonlinear system. Further, in certain cases the converse holds: contraction implies the existence of a set of observables with which stability can be verified via the Koopman framework. We provide results for the cases of autonomous and time-varying systems, as well as for orbital stability of limit cycles. Furthermore, the converse claims are based on a novel relation between the Koopman method and the construction of a Kazantzis-Kravaris-Luenberger observer. As a byproduct of the main results, we also obtain a new method to learn contraction metrics from trajectory data via linear system identification.
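For context, the two notions being connected can be sketched in standard (assumed) notation rather than the paper's: a Koopman linear embedding of an autonomous system $\dot{x} = f(x)$ through observables $\varphi$, and a contraction metric $M(x) \succ 0$ with rate $\lambda > 0$,

$$\frac{d}{dt}\varphi(x(t)) = A\,\varphi(x(t)), \qquad \dot{M}(x) + \frac{\partial f}{\partial x}^{\!\top} M(x) + M(x)\,\frac{\partial f}{\partial x} \preceq -2\lambda M(x).$$

The first condition asserts linear evolution of the lifted state; the second asserts exponential decay of the differential displacement metric $\delta x^\top M(x)\,\delta x$, which is the sense in which each framework certifies stability through linear tools.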
In many applications, and in systems/synthetic biology in particular, it is desirable to compute control policies that force the trajectory of a bistable system from one equilibrium (the initial point) to another equilibrium (the target point), or in other words to solve the switching problem. It was recently shown that, for monotone bistable systems, this problem admits easy-to-implement open-loop solutions in terms of temporal pulses (i.e., step functions of fixed length and fixed magnitude). In this paper, we develop this idea further and formulate the problem of convergence to an equilibrium from an arbitrary initial point. We show that, in the case of monotone systems, this problem can be solved via a static optimization problem. Allowing the initial point to be an arbitrary state makes it possible to build closed-loop, event-based, or open-loop policies for the switching/convergence problems. In our derivations we exploit the Koopman operator, which offers a linear infinite-dimensional representation of an autonomous nonlinear system. One of the main advantages of using the Koopman operator is the powerful computational tools developed for this framework. Besides admitting numerical solutions, the switching/convergence problem can also serve as a building block for solving more complicated control problems and can potentially be applied to non-monotone systems. We illustrate this argument on the problem of synchronizing cardiac cells by defibrillation. Potentially, our approach can be extended to problems with different parametrizations of the control signals, since the only fundamental limitation is the finite-time application of the control signal.
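An illustrative toy example of the pulse-based switching idea (the scalar system, pulse parameters, and Euler integration below are assumptions for illustration and are not the paper's setting): the bistable system $\dot{x} = x - x^3 + u$ is driven from the stable equilibrium at $-1$ toward the one at $+1$ by a single temporal pulse of fixed magnitude and length.

```python
import numpy as np

def simulate(x0, pulse_mag, pulse_len, T=20.0, dt=1e-3):
    """Integrate dx/dt = x - x**3 + u with a single rectangular input pulse."""
    x, t = x0, 0.0
    while t < T:
        u = pulse_mag if t < pulse_len else 0.0   # temporal pulse: fixed magnitude and length
        x += dt * (x - x**3 + u)                  # explicit Euler step
        t += dt
    return x

# Starting at x = -1, a sufficiently strong and long pulse pushes the state past
# the unstable equilibrium at 0, after which it converges to +1 on its own.
print(simulate(x0=-1.0, pulse_mag=1.5, pulse_len=2.0))   # ~ +1 (switch succeeds)
print(simulate(x0=-1.0, pulse_mag=0.2, pulse_len=2.0))   # ~ -1 (pulse too weak)
```

Searching over the pulse magnitude and length for the cheapest successful switch is the kind of static optimization the abstract alludes to.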
Self-triggered control (STC) is a well-established technique for reducing the number of samples in sampled-data systems and is hence particularly useful for Networked Control Systems. At each sampling instant, an STC mechanism determines not only an updated control input but also when the next sample should be taken. In this paper, a dynamic STC mechanism for nonlinear systems is proposed. The mechanism incorporates a dynamic variable for determining the next sampling instant. Such a dynamic variable for the trigger decision has proven to be a powerful tool for increasing sampling intervals in the closely related concept of event-triggered control, but has so far not been exploited for STC. This gap is closed in this paper. For the proposed mechanism, the dynamic variable is chosen to be the filtered values of the Lyapunov function at past sampling instants. The next sampling instant is chosen, based on the dynamic variable and on hybrid Lyapunov function techniques, such that an average decrease of the Lyapunov function is ensured. The proposed mechanism is illustrated with a numerical example from the literature. For this example, the obtained sampling intervals are significantly larger than those of existing static STC mechanisms. This paper is the accepted version of [1] and additionally contains proofs of the main results.
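A toy sketch of the general self-triggering pattern described above: a dynamic variable low-pass filters the Lyapunov values seen at past sampling instants and influences the next inter-sample time. The plant, controller, filter gains, and trigger rule below are crude stand-ins chosen for illustration, not the paper's hybrid Lyapunov construction.

```python
import numpy as np

def stc_simulation(T=10.0, dt=1e-3, lam=0.5, h_max=0.5):
    """Self-triggered control of dx/dt = x + u with u = -2x held between samples."""
    x, t = 1.0, 0.0
    eta = x**2                       # dynamic variable: filtered Lyapunov values
    samples = []
    while t < T:
        V = x**2                     # Lyapunov function evaluated at the sampling instant
        eta = 0.8 * eta + 0.2 * V    # update the filter with the current value
        u = -2.0 * x                 # sampled-and-held control input
        # Next inter-sample time: shrinks when the current V is large relative to eta.
        tau = min(h_max, lam * eta / (V + 1e-9))
        samples.append(t)
        t_next = t + max(tau, dt)
        while t < t_next:            # flow with the held input until the next sample
            x += dt * (x + u)
            t += dt
    return x, len(samples)

print(stc_simulation())  # small final state reached with a modest number of samples
```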
Koopman operator theory has served as the basis for extracting dynamics for nonlinear system modeling and control across settings, including non-holonomic mobile robot control. There is growing interest in deriving robustness (and/or safety) guarantees for systems whose dynamics are extracted via the Koopman operator. In this paper, we propose a way to quantify the prediction error caused by noisy measurements when the Koopman operator is approximated via Extended Dynamic Mode Decomposition. We further develop an enhanced robot control strategy that endows robustness to a class of data-driven (robotic) systems relying on Koopman operator theory, and we show how part of the strategy can be performed offline to make our algorithm capable of real-time implementation. We perform a parametric study to evaluate the (theoretical) performance of the algorithm using a Van der Pol oscillator and conduct a series of simulated experiments in Gazebo using a non-holonomic wheeled robot.