We study the formation control problem for a group of mobile agents in a plane, where each agent is modeled as a kinematic point and can only use measurements expressed in its local frame. The agents are required to maintain a geometric pattern while keeping a desired distance to a static or moving target. The prescribed formation can be any geometric pattern, and the only requirement on the neighboring relationship of the N-agent system is that it contains a directed spanning tree. To solve the formation control problem, a distributed controller is proposed based on the idea of decoupled design. One merit of the controller is that each agent uses only local measurements in its local frame, which resolves a practical issue of real multi-robot systems: the lack of a global coordinate frame or a common reference direction. To address another practical issue of real robotic applications, namely that sampled data is preferable to continuous-time signals, a sampled-data version of the controller is also developed. Convergence to the desired formation is analyzed for the multi-agent system under both the continuous-time controller with a static or moving target and the sampled-data controller with a static target. Numerical simulations illustrate the effectiveness and performance of the controllers.
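The displacement-based protocol underlying this kind of formation control can be sketched with single-integrator agents that only measure relative positions. Everything below (the gain `k`, the directed ring graph, the square formation offsets `d`) is an illustrative assumption, not the paper's actual controller:

```python
import numpy as np

def formation_step(x, d, adjacency, k=1.0, dt=0.01):
    """One Euler step of u_i = k * sum_j a_ij ((x_j - x_i) - (d_j - d_i)).

    Only relative positions x_j - x_i are used, so no global frame is needed;
    k and dt are assumed tuning values for this sketch.
    """
    n = x.shape[0]
    u = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                u[i] += k * ((x[j] - x[i]) - (d[j] - d[i]))
    return x + dt * u

# Four agents, desired unit-square pattern, directed ring topology
# (a directed ring contains a directed spanning tree).
d = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
x = np.random.default_rng(0).normal(size=(4, 2))
for _ in range(5000):
    x = formation_step(x, d, A)

# The pattern is achieved up to a common translation, so compare
# formation errors after removing their mean.
err = x - d
err -= err.mean(axis=0)
```

Writing `e_i = x_i - d_i`, the protocol is consensus on the `e_i`, so on any graph with a directed spanning tree all `e_i` converge to a common offset, i.e. the agents reach the prescribed pattern translated as a rigid body.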
We revisit and extend the Riccati theory, unifying continuous-time linear-quadratic optimal permanent and sampled-data control problems over finite and infinite time horizons. In a nutshell, we prove that:
-- as the time horizon $T$ tends to $+\infty$, one passes from the Sampled-Data Difference Riccati Equation (SD-DRE) to the Sampled-Data Algebraic Riccati Equation (SD-ARE), and from the Permanent Differential Riccati Equation (P-DRE) to the Permanent Algebraic Riccati Equation (P-ARE);
-- as the maximal step of the time partition $\Delta$ tends to $0$, one passes from (SD-DRE) to (P-DRE), and from (SD-ARE) to (P-ARE).
Our notation and analysis provide a unified framework in which all corresponding results can be established.
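The first limit above can be made concrete on a generic discrete-time LQ problem (a stand-in for the paper's sampled-data setting; the matrices below are arbitrary assumptions): iterating the difference Riccati equation backward over an ever-longer horizon drives its iterates to a fixed point of the algebraic Riccati equation.

```python
import numpy as np

# Assumed plant and costs for illustration only (a discretized double integrator).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state cost
R = np.array([[1.0]])   # input cost

# Backward difference Riccati recursion; many iterations mimic T -> +infinity.
P = np.zeros((2, 2))    # terminal cost
for _ in range(2000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal feedback gain
    P = Q + A.T @ P @ (A - B @ K)

# At the limit, P should satisfy the algebraic Riccati equation:
# P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA, so the residual vanishes.
residual = (Q + A.T @ P @ A
            - A.T @ P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            - P)
```

Since `(A, B)` is controllable and `Q` is positive definite, the iterates converge and the residual of the algebraic equation goes to numerical zero, which is exactly the DRE-to-ARE passage described above (here in the permanent discrete-time case rather than the paper's sampled-data one).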
Despite significant advances in distributed continuous-time optimization of multi-agent networks, there is still a lack of efficient algorithms that achieve distributed optimization at a pre-specified time. Herein, we design a specified-time distributed optimization algorithm for connected agents with directed topologies to collectively minimize the sum of individual objective functions subject to an equality constraint. With the designed algorithm, the settling time of distributed optimization can be exactly predefined, and its selection is independent of the initial conditions of the agents, the algorithm parameters, and the communication topologies. Furthermore, the proposed algorithm realizes specified-time optimization by exchanging information among neighbours only at discrete sampling instants, thus reducing the communication burden. In addition, the equality constraint is satisfied throughout the whole process, which makes the proposed algorithm applicable to solving distributed optimization problems online, such as economic dispatch. For the special case of undirected communication topologies, a reduced-order algorithm is also designed. Finally, the theoretical analysis is validated by numerical simulations.
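The constraint-preserving idea can be illustrated with a generic gradient-consensus sketch for economic dispatch (this is not the specified-time algorithm itself; the quadratic costs `c`, step size, graph, and demand `D` are all assumptions). Each agent adjusts its decision by the disagreement of marginal costs with its neighbours, so the equality constraint holds at every step:

```python
import numpy as np

# Assumed problem: min sum_i (c_i/2) x_i^2  s.t.  sum_i x_i = D,
# whose KKT condition is equal marginal costs c_i x_i across agents.
c = np.array([1.0, 2.0, 4.0])                          # assumed cost coefficients
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)                 # undirected complete graph
D = 6.0
x = np.array([6.0, 0.0, 0.0])                          # feasible initial allocation

for _ in range(4000):
    g = c * x                                          # marginal costs f_i'(x_i)
    # dx_i = step * sum_j a_ij (g_j - g_i): pairwise transfers on a symmetric
    # graph cancel in the sum, so sum_i x_i = D is preserved exactly.
    x = x + 0.05 * (A @ g - A.sum(axis=1) * g)
```

At convergence the marginal costs agree, giving the dispatch `x_i = lambda / c_i` with `lambda` fixed by the demand constraint; here that is `x = [24/7, 12/7, 6/7]`. The specified-time algorithm in the abstract additionally makes the settling time exactly predefined, which this plain gradient scheme does not.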
In this paper, we propose a new self-triggered formulation of Model Predictive Control for continuous-time linear networked control systems. Our control approach, which aims to reduce the number of control samples transmitted to the plant, is derived by solving, in parallel, optimal control problems with different sampling time intervals. The controller then selects one sampling pattern as a transmission decision, achieving both a reduction of the communication load and stability. The proposed strategy is illustrated through comparative simulation examples.
We study sequences, parametrized by the number of agents, of many-agent exit-time stochastic control problems with risk-sensitive cost structure. We identify a fully characterizing assumption under which each such control problem corresponds to a risk-neutral stochastic control problem with additive cost, and in turn to a risk-neutral stochastic control problem on the simplex, where the specific information about the state of each agent can be discarded. We also prove that, under some additional assumptions, the sequence of value functions converges to the value function of a deterministic control problem, which can be used to design nearly optimal controls for the original problem when the number of agents is sufficiently large.
A new definition of continuous-time equilibrium controls is introduced. As opposed to the standard definition, which involves a derivative-type operation, the new definition parallels how a discrete-time equilibrium is defined and allows for an unambiguous economic interpretation. The terms strong equilibrium and weak equilibrium are coined for controls under the new and the standard definitions, respectively. When the state process is a time-homogeneous continuous-time Markov chain, a careful asymptotic analysis gives complete characterizations of weak and strong equilibria. Thanks to the Kakutani-Fan fixed-point theorem, general existence of weak and strong equilibria is also established under an additional compactness assumption. Our theoretical results are applied to a two-state model under non-exponential discounting. In particular, we demonstrate explicitly that there can be an incentive to deviate from a weak equilibrium, which justifies the need for strong equilibria. Our analysis also provides new results on the existence and characterization of discrete-time equilibria over an infinite horizon.