We consider the problem of controlling the group behavior of a large number of dynamic systems that constantly interact with each other. These systems are assumed to have identical dynamics (e.g., a flock of birds or a swarm of robots), so their group behavior can be modeled by a distribution. Thus, this problem can be viewed as an optimal control problem over the space of distributions. We propose a novel algorithm to compute a feedback control strategy so that, when adopted by the agents, their distribution is steered from an initial one to a target one over a finite time window. Our method is built on optimal transport theory but differs significantly from existing work in this area in that it models the interactions among agents explicitly. From an algorithmic point of view, our algorithm is based on a generalized version of the proximal gradient descent algorithm and has a convergence guarantee with a sublinear rate. We further extend our framework to scenarios where the agents belong to multiple species. In the linear quadratic setting, the solution is characterized by coupled Riccati equations which can be solved in closed form. Finally, several numerical examples are presented to illustrate our framework.
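The abstract's algorithmic core is a generalized proximal gradient method. The following is not the paper's distribution-space algorithm, but a minimal Euclidean sketch of the proximal gradient iteration it generalizes (ISTA for a lasso problem); all data and parameter values are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a gradient
    # step on the smooth part with the proximal step on the nonsmooth part.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 0.5]               # sparse ground truth
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the smooth part
x_hat = proximal_gradient(A, b, lam=0.1, step=step)
```

In the paper's setting the Euclidean gradient and proximal steps are replaced by their analogues over distributions, with each proximal update itself an optimal-control subproblem.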
This paper studies an optimal consensus problem for a group of heterogeneous high-order agents with unknown control directions. Compared with existing consensus results, the consensus point is further required to be an optimal solution of a given distributed optimization problem. To solve this problem, we first augment each agent with an optimal signal generator that reproduces the global optimal point of the distributed optimization problem, and then complete the global optimal consensus design by developing adaptive tracking controllers for the augmented agents. Moreover, we present an extension to the case where only real-time gradients are available. The trajectories of all agents in both cases are shown to be well defined and to achieve the expected consensus on the optimal point. Two numerical examples are given to verify the efficacy of our algorithms.
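The paper treats high-order agents with unknown control directions; as a much simpler illustration of the underlying goal, here is a sketch of first-order agents reaching consensus on the minimizer of a sum of local quadratic costs via distributed gradient descent. The mixing matrix and local costs are hypothetical toy data.

```python
import numpy as np

# Global cost: sum_i 0.5*(x - c_i)^2, whose minimizer is mean(c).
c = np.array([1.0, 3.0, 5.0, 7.0])           # local cost parameters
W = np.array([[0.50, 0.50, 0.00, 0.00],      # doubly stochastic mixing
              [0.50, 0.25, 0.25, 0.00],      # matrix over a path graph
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.25, 0.75]])
x = np.zeros(4)                               # one state per agent
for k in range(2000):
    alpha = 1.0 / (k + 10)                    # diminishing step size
    x = W @ x - alpha * (x - c)               # mix with neighbors + local gradient
# All agents approach the global optimum mean(c) = 4.0
```

Each agent only uses its own gradient and neighbors' states, mirroring the distributed information structure; the paper's contribution is achieving the same goal through tracking controllers for heterogeneous high-order dynamics.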
This paper deals with the H2 suboptimal output synchronization problem for heterogeneous linear multi-agent systems. Given a multi-agent system with possibly distinct agents and an associated H2 cost functional, the aim is to design output-feedback-based protocols that guarantee that the associated cost is smaller than a given upper bound while the controlled network achieves output synchronization. A design method is provided to compute such protocols. For each agent, the computation of its two local control gains involves two Riccati inequalities, each of dimension equal to the state space dimension of the agent. A simulation example is provided to illustrate the performance of the proposed protocols.
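The gain computations rest on per-agent Riccati inequalities. As a minimal sketch of the associated computation, here is a stabilizing state-feedback gain obtained by solving the corresponding continuous-time algebraic Riccati *equation* with SciPy, for a hypothetical double-integrator agent (not the paper's protocol, just the standard building block).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical agent data: double integrator with identity weights.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 for P > 0.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # state-feedback gain, u = -Kx
eigs = np.linalg.eigvals(A - B @ K)  # closed loop A - BK is Hurwitz
```

For this example the exact solution is P = [[sqrt(3), 1], [1, sqrt(3)]], giving K = [1, sqrt(3)]. The paper's design uses inequality versions of such equations to bound the H2 cost rather than minimize it.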
This paper deals with data-driven output synchronization for heterogeneous leader-follower linear multi-agent systems. Given a multi-agent system that consists of one autonomous leader and a number of heterogeneous followers with external disturbances, we provide necessary and sufficient data-based conditions for output synchronization. We also provide a design method for obtaining such output synchronizing protocols directly from data. The results are then extended to the special case that the followers are disturbance-free. Finally, a simulation example is provided to illustrate our results.
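The conditions in this abstract are stated directly on measured data. As a simpler illustration of working with the same data matrices, here is a sketch of recovering the dynamics x⁺ = Ax + Bu from a noiseless input/state trajectory by least squares; the system and data are hypothetical, and the paper's conditions avoid this explicit identification step.

```python
import numpy as np

# Hypothetical true system (unknown to the "designer").
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])

# Collect a trajectory driven by random inputs.
rng = np.random.default_rng(1)
T = 20
X = np.zeros((2, T + 1))
U = rng.standard_normal((1, T))
for t in range(T):
    X[:, t + 1] = A @ X[:, t] + B @ U[:, t]

# Stack the data and solve X1 = [A B] [X0; U0] in least squares.
Z = np.vstack([X[:, :-1], U])        # 3 x T data matrix; needs full row rank
Theta = X[:, 1:] @ np.linalg.pinv(Z)
A_hat, B_hat = Theta[:, :2], Theta[:, 2:]
```

With noiseless data and a full-row-rank data matrix, the recovery is exact; the disturbance-affected setting in the paper is exactly where such a naive identification breaks down and data-based conditions are needed instead.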
Robust control is a core approach for controlling systems with performance guarantees that are robust to modeling error, and is widely used in real-world systems. However, current robust control approaches can only handle small system uncertainty, and thus require significant effort in system identification prior to controller design. We present an online approach that robustly controls a nonlinear system under large model uncertainty. Our approach is based on decomposing the problem into two sub-problems, robust control design (which assumes small model uncertainty) and chasing consistent models, which can be solved using existing tools from control theory and online learning, respectively. We provide a learning convergence analysis that yields a finite mistake bound on the number of times performance requirements are not met and can provide strong safety guarantees, by bounding the worst-case state deviation. To the best of our knowledge, this is the first approach for online robust control of nonlinear systems with such learning theoretic and safety guarantees. We also show how to instantiate this framework for general robotic systems, demonstrating the practicality of our approach.
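The "chasing consistent models" idea can be sketched in a toy scalar setting: the controller acts on a candidate model and discards it the moment an observed transition falsifies it, with each switch counting as one mistake. Everything below (the hypothesis set, the deadbeat controller, the noiseless dynamics) is a hypothetical simplification, not the paper's method.

```python
# Scalar dynamics x+ = a*x + u with unknown a from a finite hypothesis set.
candidates = [0.5, 1.2, 2.0]   # hypothetical model set; must contain a_true
a_true = 1.2
a_hat = candidates[0]          # current working model
x, mistakes = 1.0, 0
for _ in range(20):
    u = -a_hat * x                              # deadbeat control under the model
    x_next = a_true * x + u                     # true (unknown) dynamics
    if abs(x_next - (a_hat * x + u)) > 1e-9:    # prediction falsified
        candidates = [a for a in candidates
                      if abs(a * x + u - x_next) < 1e-9]  # keep consistent models
        a_hat = candidates[0]                   # switch to a consistent model
        mistakes += 1
    x = x_next
```

Because a falsified model is never revisited, the number of mistakes is bounded by the size of the hypothesis set, a toy version of the finite mistake bound; the paper handles continuous model classes, nonlinear dynamics, and disturbances.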
We consider the covariance steering problem for nonlinear control-affine systems. Our objective is to find an optimal control strategy that steers the state of a system from an initial distribution to a target one whose mean and covariance are given. Due to the nonlinearity, existing techniques for linear covariance steering problems are not directly applicable. By leveraging the celebrated Girsanov theorem, we formulate the problem as an optimization over the space of path distributions. We then adopt a generalized proximal gradient algorithm to solve this optimization, where each update requires solving a linear covariance steering problem. Our algorithm is guaranteed to converge to a locally optimal solution at a sublinear rate. In addition, each iteration of the algorithm can be carried out in closed form, so its computational complexity is insensitive to the resolution of the time discretization.
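To make "steering a distribution to a target mean and covariance" concrete, here is the degenerate single-step scalar case: an affine map that sends samples of N(m0, s0²) exactly to N(m1, s1²). This is only an illustration of the moment-steering objective, not the paper's dynamic, nonlinear formulation; all numbers are hypothetical.

```python
import numpy as np

# Initial and target moments (hypothetical).
m0, s0 = 0.0, 2.0
m1, s1 = 3.0, 0.5

rng = np.random.default_rng(2)
x0 = m0 + s0 * rng.standard_normal(100000)   # samples from the initial Gaussian

# Affine "control": rescale to match the target std, shift to the target mean.
x1 = m1 + (s1 / s0) * (x0 - m0)
print(x1.mean(), x1.std())                   # approximately 3.0 and 0.5
```

In the linear-dynamics case the analogous transformation is realized over time by a feedback law obtained from coupled Riccati-type equations; the paper's contribution is reducing the nonlinear problem to a sequence of such linear subproblems.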