This paper presents an approach to target tracking that is based on a variable-gain integrator and the Newton-Raphson method for finding zeros of a function. Its underlying idea is to determine the feedback law from measurements of the system's output and an estimate of its future state obtained via lookahead simulation. The resulting feedback law is generally nonlinear. We first apply the proposed approach to tracking a constant reference by the output of nonlinear memoryless plants. We then extend it in a number of directions, including the tracking of time-varying reference signals by dynamic, possibly unstable systems. The approach is new, hence its analysis is preliminary, and theoretical results are derived for nonlinear memoryless plants and linear dynamic plants. However, the setting for the controller does not require the plant to be either linear or stable, and this is verified by simulation of an inverted pendulum tracking a time-varying signal. We also present results of laboratory experiments in which a platoon of mobile robots is controlled.
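The core update can be illustrated with a minimal sketch (assumptions: a scalar memoryless plant y = g(u) with g strictly increasing, a constant reference r, and forward-Euler integration; these specifics are illustrative and not from the paper):

```python
import numpy as np

def newton_raphson_flow(g, dg, r, u0, alpha=5.0, dt=1e-3, T=5.0):
    """Forward-Euler integration of the variable-gain integrator
    u_dot = alpha * (r - g(u)) / g'(u), i.e. a continuous-time
    Newton-Raphson flow driving g(u) toward the setpoint r."""
    u = u0
    traj = []
    for _ in range(int(T / dt)):
        u += dt * alpha * (r - g(u)) / dg(u)   # variable gain 1/g'(u)
        traj.append(u)
    return np.array(traj)

# Illustrative memoryless plant y = g(u) (not from the paper):
g  = lambda u: u**3 + u          # strictly increasing, so g'(u) > 0
dg = lambda u: 3 * u**2 + 1
traj = newton_raphson_flow(g, dg, r=10.0, u0=0.0)
print(g(traj[-1]))               # output approaches the reference 10
```

Because the gain 1/g'(u) rescales the tracking error, the error r - g(u) decays at the uniform rate alpha in this sketch, regardless of the local slope of g.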
In this paper, the classical problem of tracking and regulation is studied in a data-driven context. The endosystem is assumed to be an unknown system that is interconnected to a known exosystem generating disturbances and reference signals. The problem is to design a regulator so that the output of the (unknown) endosystem tracks the reference signal, regardless of its initial state and the incoming disturbances. To do this, we assume that a set of input-state data on a finite time interval is available. We introduce the notion of data informativity for regulator design and establish necessary and sufficient conditions for a given set of data to be informative. We also give formulas for suitable regulators in terms of the data. Our results are illustrated by means of two extended examples.
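As context for the data-driven setting, the sketch below shows how input-state samples are commonly arranged into data matrices and checked against a standard rank condition; the paper's actual informativity conditions for regulator design are not reproduced here, and all dimensions and numbers are illustrative.

```python
import numpy as np

# Illustrative only: arrange input-state samples from an unknown system into
# the data matrices used throughout the data informativity literature.
rng = np.random.default_rng(0)
n, m, T = 3, 1, 20
A_true = 0.3 * rng.normal(size=(n, n))        # unknown to the designer
B_true = rng.normal(size=(n, m))
X = np.zeros((n, T + 1))
X[:, 0] = rng.normal(size=n)
U = rng.normal(size=(m, T))
for t in range(T):
    X[:, t + 1] = A_true @ X[:, t] + B_true @ U[:, t]

X_minus, X_plus = X[:, :T], X[:, 1:]
# A common prerequisite: the stacked data matrix [X_-; U] has full row rank
# n + m, in which case (A, B) is uniquely determined by the data through
# X_+ = [A B] [X_-; U].
print(np.linalg.matrix_rank(np.vstack([X_minus, U])) == n + m)
```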
In this paper, we consider a discrete-time stochastic control problem with uncertain initial and target states. We first discuss the connection between optimal transport and stochastic control problems of this form. Next, we formulate a linear-quadratic regulator problem in which the initial and terminal states are distributed according to specified probability densities. A closed-form solution for the optimal transport map in the case of linear time-varying systems is derived, along with an algorithm for computing the optimal map. Two numerical examples pertaining to swarm deployment demonstrate the practical applicability of the model and the performance of the numerical method.
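For intuition on the transport side of the problem, the following sketch computes the classical closed-form Monge map between two Gaussian densities under quadratic cost; this is a standard result used here only for illustration, not the paper's formula for linear time-varying systems, and the means and covariances are made up.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_ot_map(m0, S0, m1, S1):
    """Standard closed-form Monge map T(x) = m1 + A (x - m0) between
    N(m0, S0) and N(m1, S1) under quadratic cost, with
    A = S0^{-1/2} (S0^{1/2} S1 S0^{1/2})^{1/2} S0^{-1/2}."""
    S0_half = sqrtm(S0)
    S0_half_inv = np.linalg.inv(S0_half)
    # discard negligible imaginary parts that sqrtm may introduce
    A = np.real(S0_half_inv @ sqrtm(S0_half @ S1 @ S0_half) @ S0_half_inv)
    return lambda x: m1 + A @ (x - m0)

# Example: transport a swarm-like Gaussian cloud to a target density.
m0, S0 = np.zeros(2), np.eye(2)
m1, S1 = np.array([5.0, 0.0]), np.diag([0.5, 2.0])
T = gaussian_ot_map(m0, S0, m1, S1)
pts = np.random.default_rng(1).normal(size=(100, 2))
moved = np.array([T(p) for p in pts])
print(moved.mean(axis=0))   # close to m1, up to sampling error
```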
This paper presents a performance-regulation method for a class of stochastic timed event-driven systems aimed at output tracking of a given reference setpoint. The systems are either Discrete Event Dynamic Systems (DEDS), such as queueing networks or Petri nets, or Hybrid Systems (HS) with both time-driven and event-driven dynamics, such as fluid queues and hybrid Petri nets. The regulator, designed for simplicity and speed of computation, consists of a single integrator with a variable gain that ensures effective tracking under time-varying plants. The gain computation is based on the Infinitesimal Perturbation Analysis (IPA) gradient of the plant function with respect to the control variable, and the resulting tracking can be quite robust with respect to modeling inaccuracies and gradient-estimation errors. The proposed technique is tested on examples taken from various application areas and modeled with different formalisms, including queueing models, a Petri-net model of a production-inventory control system, and a stochastic DEDS model of multicore-chip control. Simulation results are presented in support of the proposed approach.
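A minimal sketch of the general regulator structure (a single integrator whose gain is the reciprocal of an estimated plant derivative) is given below; the plant, its noise, and the finite-difference gradient stand in for a real DEDS model and its IPA estimator, and are not from the paper.

```python
import numpy as np

def regulate(plant, grad_estimate, r, u0, steps=40):
    """Single integrator with a variable gain: each cycle the gain is the
    reciprocal of an estimated plant derivative, so the update is a
    Newton-like step of the tracking error toward the setpoint r."""
    u = u0
    for _ in range(steps):
        y = plant(u)
        A_n = 1.0 / grad_estimate(u)     # variable gain from the gradient estimate
        u = u + A_n * (r - y)
    return u, plant(u)

# Illustrative plant: a concave throughput-like curve with measurement noise
# (not a DEDS model); a finite difference stands in for the IPA gradient.
rng = np.random.default_rng(2)
plant = lambda u: np.sqrt(u) + 0.02 * rng.normal()
grad_estimate = lambda u, h=1e-3: (np.sqrt(u + h) - np.sqrt(u)) / h
u_final, y_final = regulate(plant, grad_estimate, r=3.0, u0=1.0)
print(u_final, y_final)   # y_final hovers near the setpoint 3
```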
The convex analytic method (generalized by Borkar) has proved to be a very versatile approach to the study of infinite horizon average cost optimal stochastic control problems. In this paper, we revisit the convex analytic method and make three primary contributions: (i) We present an existence result, under a near-monotone cost hypothesis, for controlled Markov models that lack weak continuity of the transition kernel but are strongly continuous in the action variable for every fixed state variable. (ii) For average cost stochastic control problems in standard Borel spaces, while existing results establish the optimality of stationary (possibly randomized) policies, few results are available on the optimality of stationary deterministic policies, and these are under rather restrictive hypotheses. We provide mild conditions under which an average cost optimal stochastic control problem admits optimal solutions that are deterministic and stationary, building upon a study of strategic measures by Feinberg. (iii) We establish conditions under which the performance under stationary deterministic policies is dense in the set of performance values under randomized stationary policies.
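For orientation, the convex analytic method works with the standard occupation-measure program sketched below (notation illustrative; the paper's contributions concern existence and the structure of optimal solutions, not this formulation itself):

```latex
% Standard occupation-measure formulation underlying the convex analytic
% method for the average cost criterion (notation illustrative).
\begin{align*}
  \inf_{\nu}\ & \int_{\mathbb{X}\times\mathbb{U}} c(x,u)\,\nu(dx,du) \\
  \text{s.t. }\ & \nu(B\times\mathbb{U}) = \int_{\mathbb{X}\times\mathbb{U}}
      P(B \mid x,u)\,\nu(dx,du) \quad \text{for all Borel } B\subseteq\mathbb{X},\\
  & \nu \ \text{a probability measure on } \mathbb{X}\times\mathbb{U}.
\end{align*}
```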
Hybrid AC/DC networks are a key technology for future electrical power systems, due to the increasing number of converter-based loads and distributed energy resources. In this paper, we consider the design of control schemes for hybrid AC/DC networks, focusing especially on the control of the interlinking converters (ILCs). We present two control schemes: first, a decentralized primary controller and, second, a distributed secondary controller. In the primary case, the stability of the controlled system is proven in a general hybrid AC/DC network that may include asynchronous AC subsystems. Furthermore, it is demonstrated that power sharing across the AC/DC network is significantly improved compared to the previously proposed dual droop control. The proposed secondary control scheme guarantees the convergence of the AC system frequencies and the average DC voltage of each DC subsystem to their nominal values, respectively. An optimal power allocation is also achieved at steady state. The applicability and effectiveness of the proposed algorithms are verified by simulation on a test hybrid AC/DC network in MATLAB/Simscape Power Systems.
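For context, the baseline dual droop idea that the proposed controllers are compared against can be sketched as a static power command for an interlinking converter, combining AC-frequency and DC-voltage deviations; the gains, nominal values, and sign convention below are illustrative and not from the paper.

```python
def ilc_power_setpoint(freq, v_dc, f_nom=50.0, v_nom=400.0,
                       k_f=100.0, k_v=10.0, p_ref=0.0):
    """Generic dual-droop-style power command for an interlinking converter:
    the power transferred toward the AC side increases when the AC frequency
    sags and decreases when the DC voltage sags. All parameters illustrative."""
    return p_ref + k_f * (f_nom - freq) - k_v * (v_nom - v_dc)

# Example: AC frequency 0.1 Hz low and DC voltage 5 V low; the result is
# approximately -40 under this sign convention, meaning the DC-side deficit
# dominates and the commanded transfer reverses toward the DC subsystem.
print(ilc_power_setpoint(freq=49.9, v_dc=395.0))
```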