We present a method for incremental modeling and time-varying control of unknown nonlinear systems. The method combines elements of evolving intelligence, granular machine learning, and multi-variable control. We propose a State-Space Fuzzy-set-Based evolving Modeling (SS-FBeM) approach. The resulting fuzzy model is developed structurally and parametrically from a data stream, with a focus on memory and data coverage. The fuzzy controller also evolves, based on the data instances and the fuzzy model parameters. Its local gains are redesigned in real time, whenever the corresponding local fuzzy models change, by solving a linear matrix inequality problem derived from a fuzzy Lyapunov function and bounded-input conditions. We demonstrate one-step prediction and asymptotic stabilization of the Hénon chaotic map.
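As a point of reference for the reported one-step prediction experiment, the sketch below only generates the standard Hénon chaotic stream (a = 1.4, b = 0.3) and frames it as one-step-ahead input/target pairs; the SS-FBeM model and controller themselves are not reproduced here.

```python
# Minimal sketch (not SS-FBeM): generate the Henon chaotic stream that an
# incremental one-step-ahead predictor would consume sample by sample.
import numpy as np

def henon_stream(n, a=1.4, b=0.3, x0=0.1, y0=0.1):
    """Standard Henon map: x[k+1] = 1 - a*x[k]^2 + y[k], y[k+1] = b*x[k]."""
    x, y = x0, y0
    for _ in range(n):
        yield x, y
        x, y = 1.0 - a * x * x + y, b * x

# Frame the stream as one-step prediction pairs: input (x[k], y[k]) -> target x[k+1].
data = np.array(list(henon_stream(1000)))
inputs, targets = data[:-1], data[1:, 0]
print(inputs.shape, targets.shape)  # (999, 2) (999,)
```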
This paper studies a decentralized fuzzy control method for a class of fractional-order interconnected systems with unknown control directions. To overcome the difficulties caused by multiple unknown control directions in fractional-order systems, a novel fractional-order Nussbaum function technique is proposed. This technique is more general than those in existing works, since it not only handles single or multiple unknown control directions but also applies to fractional- and integer-order, single and interconnected systems. Based on this technique, a new decentralized adaptive control method is proposed for fractional-order interconnected systems. Smooth functions are introduced to adaptively compensate for unknown interactions among subsystems. Furthermore, fuzzy logic systems are utilized to approximate unknown nonlinearities. It is proven that the designed controller guarantees the boundedness of all signals in the interconnected systems and the convergence of the tracking errors. Two examples are given to demonstrate the validity of the proposed method.
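For readers unfamiliar with Nussbaum-type adaptation, the sketch below simulates the classical integer-order idea on a scalar plant with an unknown control direction; the plant parameters and the function N(z) = z^2 cos(z) are illustrative choices, and the paper's fractional-order extension is not reproduced.

```python
# Classical (integer-order) Nussbaum-gain regulation of x' = a*x + b*u with
# unknown a and unknown sign of b; purely illustrative, not the paper's
# fractional-order technique.
import numpy as np

def nussbaum(z):
    """A standard Nussbaum function: N(z) = z^2 * cos(z)."""
    return z**2 * np.cos(z)

a, b = 1.0, -2.0            # unknown gain and unknown (here negative) control direction
x, z, dt = 1.0, 0.0, 1e-3   # state, Nussbaum argument, Euler step
for _ in range(20_000):
    u = nussbaum(z) * x     # control whose effective sign is "searched" online
    x += dt * (a * x + b * u)
    z += dt * x**2          # Nussbaum argument grows until a stabilizing gain is found
print(f"final |x| = {abs(x):.2e}, final z = {z:.2f}")  # x is regulated despite the unknown sign
```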
In this paper, a distributed learning leader-follower consensus protocol based on Gaussian process regression is designed for a class of nonlinear multi-agent systems with unknown dynamics. We propose a distributed learning approach to predict the residual dynamics of each agent. The stability of the consensus protocol using the data-driven model of the dynamics is shown via Lyapunov analysis. Under the proposed control law, the followers ultimately synchronize to the leader within guaranteed error bounds with high probability. The effectiveness and applicability of the developed protocol are demonstrated by simulation examples.
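A minimal sketch of the per-agent learning ingredient is given below: Gaussian process regression of an unknown residual term from noisy state samples, with the predictive standard deviation playing the role of the high-probability error bound. The residual function, kernel, and data are illustrative assumptions; the consensus protocol and the Lyapunov analysis are not reproduced.

```python
# Sketch: learn a hypothetical residual dynamics term f(x) with a GP and query
# its mean and uncertainty; scikit-learn is used here for brevity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def residual(x):
    """Hypothetical unknown residual dynamics an agent must learn from data."""
    return np.sin(x[:, 0]) * np.cos(x[:, 1])

X_train = rng.uniform(-2.0, 2.0, size=(80, 2))                  # collected state samples
y_train = residual(X_train) + 0.01 * rng.standard_normal(80)    # noisy observations

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

mean, std = gp.predict(np.array([[0.5, -1.0]]), return_std=True)
# The predictive std is what a high-probability error bound would be built from.
print(f"predicted residual = {mean[0]:.3f} +/- {2 * std[0]:.3f}")
```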
In this paper, we equip the conventional discrete-time queueing network with a Markovian input process that, in addition to the usual short-term stochastics, governs the mid- to long-term behavior of the links between the network nodes. This is reminiscent of so-called Jump-Markov systems in control theory and allows the network topology to change over time. We argue that the common back-pressure control policy is inadequate to control such network dynamics and propose a novel control policy inspired by the paradigms of model-predictive control. Specifically, by defining a suitable but arbitrary prediction horizon, our policy takes future network states and possible control actions into account. This stands in clear contrast to most other policies, which are myopic, i.e., consider only the next state. We show numerically that such an approach can significantly improve the control performance and introduce several variants that trade off performance against computational complexity. In addition, we prove so-called throughput optimality of our policy, which guarantees stability for all network flows that the network can sustain. Interestingly, in contrast to general stability proofs in model-predictive control, our proof does not require the assumption of a terminal set (i.e., that the prediction horizon be sufficiently large). Finally, we provide several illustrative examples, one of which is a network of synchronized queues. This constitutes a particularly interesting system class, in which our policy outperforms general back-pressure policies, which even lose their throughput optimality in such networks.
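The contrast between a myopic back-pressure decision and a horizon-based one can be illustrated on a toy two-link tandem network, sketched below. The link-capacity forecast stands in for the (here assumed perfectly known) Markov-modulated link states; all numbers are illustrative and the sketch is not the paper's policy.

```python
# Toy two-link tandem 1 -> 2 -> sink, one commodity, one served link per slot.
from itertools import product

def step(q, action, caps):
    """Apply one scheduling action; one new packet arrives at node 1 each slot."""
    q1, q2 = q
    c12, c23 = caps
    if action == "link12":
        moved = min(q1, c12)
        q1, q2 = q1 - moved, q2 + moved
    else:  # "link23": drain queue 2 toward the sink
        q2 = max(q2 - c23, 0)
    return (q1 + 1, q2)

def backpressure(q, caps):
    """Myopic rule: serve the link with the larger backlog-differential * capacity."""
    q1, q2 = q
    return "link12" if (q1 - q2) * caps[0] >= q2 * caps[1] else "link23"

def rollout(q, seq, caps_forecast):
    for action, caps in zip(seq, caps_forecast):
        q = step(q, action, caps)
    return q

def lookahead(q, caps_forecast):
    """Enumerate all action sequences over the horizon and return the first
    action of the sequence with the smallest total final backlog."""
    best = min(product(["link12", "link23"], repeat=len(caps_forecast)),
               key=lambda seq: sum(rollout(q, seq, caps_forecast)))
    return best[0]

# Link 1->2 is about to fail while link 2->3 is about to speed up.
q, forecast = (3, 4), [(5, 1), (0, 5), (0, 5)]
print("myopic choice:   ", backpressure(q, forecast[0]))  # drains one packet now
print("lookahead choice:", lookahead(q, forecast))        # pre-positions packets at node 2
```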
This paper investigates bilateral control of teleoperators with closed architecture subject to arbitrary bounded time-varying delay. A prominent challenge for bilateral control of such teleoperators lies in the closed architecture, especially when interaction force/torque measurements are not available. This has led to the long-standing situation in which most bilateral controllers rigorously developed in the literature are difficult to justify when applied to teleoperators with closed architecture. Using a new class of dynamic feedback, we propose kinematic and adaptive dynamic controllers for teleoperators with closed architecture and show that both are robust with respect to arbitrary bounded time-varying delay. In addition, by exploiting the input-output properties of an inverted form of the dynamics of robot manipulators with closed architecture, we remove, in the stability analysis of the proposed adaptive dynamic control, the assumption of uniform exponential stability of a linear time-varying system arising from the adaptation to the gains of the inner controller. The application of the proposed approach is illustrated by experimental results using a Phantom Omni and a UR10 robot.
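The sketch below only illustrates the problem setting, i.e. two devices coupled through positions exchanged over bounded time-varying delays; it uses a classical position-plus-damping coupling with arbitrary gains, not the dynamic-feedback and adaptive controllers proposed in the paper.

```python
# Two unit-mass 1-DOF devices exchanging delayed positions; a generic
# position + damping coupling keeps them bounded and synchronizing despite
# bounded, randomly time-varying delays (50-150 ms).
import numpy as np

dt, T, k, b = 1e-3, 10.0, 4.0, 3.0      # step, duration, coupling and damping gains
n = int(T / dt)
qm, qs = np.zeros(n), np.zeros(n)       # master / slave position histories
vm = vs = 0.0
qm[0], qs[0] = 1.0, -0.5                # start apart; free motion (no human/environment force)
rng = np.random.default_rng(1)

for i in range(1, n):
    d1, d2 = rng.integers(50, 150), rng.integers(50, 150)  # delays in samples
    qs_seen = qs[max(i - 1 - d2, 0)]    # slave position as seen by the master
    qm_seen = qm[max(i - 1 - d1, 0)]    # master position as seen by the slave
    acc_m = -k * (qm[i - 1] - qs_seen) - b * vm
    acc_s = -k * (qs[i - 1] - qm_seen) - b * vs
    vm, vs = vm + dt * acc_m, vs + dt * acc_s
    qm[i], qs[i] = qm[i - 1] + dt * vm, qs[i - 1] + dt * vs

print(f"final positions: master {qm[-1]:.3f}, slave {qs[-1]:.3f}")  # nearly coincide
```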
This paper deals with the computation of the largest robust control invariant sets (RCISs) of constrained nonlinear systems. The proposed approach casts the search for the invariant set as a graph-theoretical problem. Specifically, a general class of discrete-time, time-invariant nonlinear systems is considered. First, the dynamics of the nonlinear system are approximated by a directed graph. Subsequently, the condition for robust control invariance is derived and an algorithm for computing the robust control invariant set is presented. The algorithm combines an iterative subdivision technique with the robust control invariance condition to produce outer approximations of the largest RCIS at each iteration. We then prove convergence of the algorithm to the largest RCIS as the number of iterations tends to infinity. Based on the developed algorithm, an algorithm for computing inner approximations of the RCIS is also presented. The special case of input-affine and disturbance-affine systems is also considered. Finally, two numerical examples are presented to demonstrate the efficacy of the proposed method.
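A drastically simplified sketch of the underlying set iteration is given below for a scalar, input- and disturbance-affine example: cover the state constraint with boxes, then repeatedly discard boxes from which no admissible input keeps the successor set inside the surviving union for all (extreme) disturbances. The subdivision refinement and the graph construction of the paper are omitted, a single input is used per box, and the system, input, and disturbance sets are arbitrary illustrative choices.

```python
# Fixed-resolution removal iteration for x+ = 1.5*x + u + w on X = [-1, 1],
# u in [-0.3, 0.3], w in [-0.1, 0.1]; the surviving boxes approximate the RCIS.
import numpy as np

f = lambda x, u, w: 1.5 * x + u + w          # dynamics, increasing in x
U = np.linspace(-0.3, 0.3, 13)               # gridded admissible inputs
W = np.array([-0.1, 0.1])                    # extreme disturbances (enough for this affine map)
edges = np.linspace(-1.0, 1.0, 81)           # box edges covering the state constraint
alive = np.ones(len(edges) - 1, dtype=bool)  # boxes currently kept

def covered(lo, hi):
    """True if the interval [lo, hi] lies inside the union of surviving boxes."""
    if lo < edges[0] or hi > edges[-1]:
        return False
    i0 = np.searchsorted(edges, lo, side="right") - 1
    i1 = np.searchsorted(edges, hi, side="left") - 1
    return alive[max(i0, 0):min(i1, len(alive) - 1) + 1].all()

changed = True
while changed:
    changed = False
    for j in np.flatnonzero(alive):
        lo, hi = edges[j], edges[j + 1]
        # keep box j only if some input maps it into the surviving union for every disturbance
        ok = any(all(covered(f(lo, u, w), f(hi, u, w)) for w in W) for u in U)
        if not ok:
            alive[j] = False
            changed = True

kept = edges[:-1][alive]
width = edges[1] - edges[0]
print(f"approximate RCIS: [{kept.min():.3f}, {kept.max() + width:.3f}]")  # about [-0.4, 0.4]
```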