We address the optimal dynamic formation problem in mobile leader-follower networks, where an optimal formation is generated to maximize a given objective function while continuously preserving connectivity. We show that, in a convex mission space, the connectivity constraints are satisfied by every feasible solution of a mixed integer nonlinear optimization problem. When the optimal formation objective is to maximize coverage in a mission space cluttered with obstacles, we separate the process into time intervals during which no obstacles are detected and intervals during which one or more obstacles are detected. In the latter case, we propose a minimum-effort reconfiguration approach that still optimizes the objective function while avoiding the obstacles and ensuring connectivity. We include simulation results illustrating this dynamic formation process.
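To give a rough sense of the kind of optimization involved, the sketch below maximizes a simple coverage objective subject to pairwise connectivity constraints in an obstacle-free convex mission space. The exponential sensing model, the fixed leader-follower chain topology, the communication range R, and the use of a continuous nonlinear solver are illustrative assumptions; the paper itself formulates and solves a mixed integer nonlinear program.

```python
# Hedged sketch (not the paper's formulation): coverage-maximizing follower
# positions with chain connectivity constraints in a convex, obstacle-free space.
import numpy as np
from scipy.optimize import minimize

N = 4            # number of followers (assumed)
R = 1.0          # communication range (assumed)
grid = np.mgrid[0:3:15j, 0:3:15j].reshape(2, -1).T   # sample points of the mission space

def coverage(x):
    """Negative joint detection probability summed over grid points (minimized)."""
    pos = np.vstack([[0.0, 0.0], x.reshape(N, 2)])   # leader fixed at the origin + followers
    d = np.linalg.norm(grid[:, None, :] - pos[None, :, :], axis=2)
    p_miss = np.prod(1.0 - np.exp(-d), axis=1)       # probability no agent detects a point
    return -(1.0 - p_miss).sum()

def chain_connectivity(x):
    """Nonnegative iff each follower stays within range R of its predecessor."""
    pos = np.vstack([[0.0, 0.0], x.reshape(N, 2)])
    gaps = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    return R - gaps

x0 = np.random.default_rng(0).uniform(0.0, 1.0, 2 * N)
res = minimize(coverage, x0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": chain_connectivity}])
print("follower positions:\n", res.x.reshape(N, 2))
```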
We study the general formation problem for a group of mobile agents in a plane, in which the agents are required to maintain a distribution pattern, as well as to rotate around, or remain static relative to, a static or moving target. The prescribed distribution pattern is a class of general formations in which the distances between neighboring agents, or the distances from each agent to the target, need not be equal. Each agent is modeled as a double integrator and can only perceive the relative information of the target and its neighbors. A distributed control law based on the limit-cycle idea is designed to solve the problem. One merit of the controller is that each agent can implement it in its own Frenet-Serret frame, so that only local information is utilized and no global information is required. Theoretical analysis is provided for the equilibrium of the N-agent system and for the convergence of its converging part. Numerical simulations are given to show the effectiveness and performance of the proposed controller.
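The sketch below illustrates the limit-cycle idea for a single double-integrator agent circling a static target: a planar vector field attracts the agent to a circle of prescribed radius and rotates it around the target, and a velocity-tracking feedback turns this field into an acceleration command. The gains, rates, and Euler integration are assumptions, and the neighbor spacing terms and the Frenet-Serret implementation of the paper's controller are omitted.

```python
# Hedged sketch (not the paper's control law): limit-cycle field + velocity tracking
# for one double-integrator agent; neighbor spacing terms are omitted for brevity.
import numpy as np

def limit_cycle_velocity(p, target, rho, omega=0.5, k_r=4.0):
    """Desired velocity: converge to radius rho around the target and rotate at rate omega."""
    rel = p - target
    r = np.linalg.norm(rel) + 1e-9
    e_r = rel / r                       # radial unit vector
    e_t = np.array([-e_r[1], e_r[0]])   # tangential unit vector (counterclockwise)
    return -k_r * (r - rho) * e_r + omega * r * e_t

def control(p, v, target, rho, k_v=8.0):
    """Double-integrator acceleration command: track the limit-cycle velocity field."""
    return -k_v * (v - limit_cycle_velocity(p, target, rho))

# Euler simulation of a single agent (assumed step size and initial state).
p, v, dt = np.array([2.0, 0.5]), np.zeros(2), 0.01
for _ in range(1000):
    a = control(p, v, target=np.zeros(2), rho=1.0)
    v += dt * a
    p += dt * v
print("distance to target:", np.linalg.norm(p))   # settles close to the prescribed radius 1
```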
This paper studies an optimal consensus problem for a group of heterogeneous high-order agents with unknown control directions. Compared with existing consensus results, the consensus point is further required to be an optimal solution of a given distributed optimization problem. To solve this problem, we first augment each agent with an optimal signal generator that reproduces the global optimal point of the distributed optimization problem, and then complete the global optimal consensus design by developing adaptive tracking controllers for the augmented agents. Moreover, we present an extension to the case where only real-time gradients are available. The trajectories of all agents in both cases are shown to be well defined and to achieve the expected consensus on the optimal point. Two numerical examples are given to verify the efficacy of our algorithms.
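To make the optimal-signal-generator idea concrete, the sketch below runs a discrete-time consensus-plus-gradient recursion on a four-agent ring with assumed quadratic local costs; the generator states settle near the minimizer of the global cost, which the adaptive tracking controllers would then have the physical agents follow. The weights, step size, and discrete-time form are illustrative assumptions, not the paper's generator.

```python
# Hedged sketch: distributed gradient descent as a stand-in for an optimal signal
# generator. Local costs f_i(s) = 0.5*(s - c_i)^2, so the global optimum is mean(c).
import numpy as np

c = np.array([1.0, 3.0, -2.0, 4.0])        # assumed local cost parameters
W = np.array([[0.50, 0.25, 0.00, 0.25],    # doubly stochastic weights of a ring graph
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

s = np.zeros(4)                            # generator states, one per agent
alpha = 0.01                               # constant step size (assumed)
for _ in range(2000):
    grad = s - c                           # local gradients of the quadratic costs
    s = W @ s - alpha * grad               # consensus step plus local gradient step
print("generator states:", s)
print("global optimum:  ", c.mean())       # states end up near (not exactly at) the optimum
```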
In this technical note, we investigate an optimal output consensus problem for heterogeneous uncertain nonlinear multi-agent systems. The considered agents are described by high-order nonlinear dynamics subject to both static and dynamic uncertainties. A two-step design, comprising the sequential construction of an optimal signal generator and a distributed partial stabilization feedback controller, is developed to overcome the difficulties brought by nonlinearities, uncertainties, and the optimality requirement. Our study not only assures output consensus but also achieves an optimal agreement characterized by a distributed optimization problem.
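A toy version of the two-step design for a single agent is sketched below: step one runs a gradient-flow generator for an assumed scalar quadratic cost (the consensus coupling with the other agents is dropped), and step two applies a simple tracking feedback that drives the output of a double-integrator stand-in for the high-order nonlinear agent to the generated reference. All dynamics and gains are assumptions for illustration, not the paper's controller.

```python
# Hedged sketch of the two-step idea: (1) generator produces the optimal reference,
# (2) a feedback controller makes the agent's output track that reference.
import numpy as np

c = 2.0                          # minimizer of the assumed local cost f(s) = 0.5*(s - c)**2
r = 0.0                          # optimal signal generator state
y, ydot = -1.0, 0.0              # output and output rate of a double-integrator stand-in
dt = 0.01
for _ in range(3000):
    r += dt * (-(r - c))                   # step 1: gradient flow of the local cost
    u = -4.0 * (y - r) - 3.0 * ydot        # step 2: tracking feedback (assumed gains)
    ydot += dt * u
    y += dt * ydot
print("reference:", round(r, 3), " agent output:", round(y, 3))
```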
This paper addresses the problem of positive consensus of directed multi-agent systems with observer-type output-feedback protocols. More specifically, a directed graph is used to model the communication topology of the multi-agent system, and linear matrix inequalities (LMIs) are used in the consensus analysis. Using positive systems theory and graph theory, a convex programming algorithm is developed to design appropriate protocols such that the multi-agent system reaches consensus with its state trajectory always remaining in the nonnegative orthant. Finally, numerical simulations are given to illustrate the effectiveness of the derived theoretical results.
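The basic convex-programming ingredient, checking a Lyapunov linear matrix inequality, can be sketched as follows; the example matrix and solver defaults are assumptions, and the paper's actual algorithm additionally enforces positivity of the closed loop and designs observer-type output-feedback protocols over the directed graph.

```python
# Hedged sketch: LMI feasibility check with convex programming (cvxpy).
import cvxpy as cp
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])                      # assumed Metzler, Hurwitz example matrix

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> np.eye(2),                   # P positive definite
               A.T @ P + P @ A << -np.eye(2)]    # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print("LMI status:", prob.status)
print("P =\n", P.value)
```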
The problem of controlling multi-agent systems under different models of information sharing among agents has received significant attention in the recent literature. In this paper, we consider a setup where, rather than committing to a fixed information sharing protocol (e.g., periodic sharing or no sharing), agents can dynamically decide at each time step whether to share information with each other and incur the resulting communication cost. This setup requires a joint design of the agents' communication and control strategies in order to optimize the trade-off between communication costs and the control objective. We first show that agents can ignore a large part of their private information without compromising system performance. We then provide a solution to the strategy optimization problem based on the common information approach. This approach relies on constructing a fictitious POMDP whose solution, obtained via a dynamic program, characterizes the optimal strategies for the agents. We also show that our solution can easily be modified to incorporate constraints on when and how frequently agents can communicate.
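As a toy stand-in for the dynamic program over common information, the sketch below computes by backward induction when paying an assumed fixed communication cost is worthwhile, given a control penalty that grows with the staleness of the last shared information. The scalar staleness state, quadratic penalty, and finite horizon are all assumptions; the paper's fictitious POMDP is defined over beliefs induced by the agents' common information.

```python
# Hedged sketch: backward induction over "steps since the last information exchange".
COMM_COST = 4.0
HORIZON = 10

def staleness_penalty(tau):
    """Assumed control cost of acting on information that is tau steps old."""
    return 0.5 * tau ** 2

# V[t][tau]: optimal cost-to-go at time t when shared information is tau steps old.
V = [[0.0] * (HORIZON + 1) for _ in range(HORIZON + 1)]
policy = [[None] * (HORIZON + 1) for _ in range(HORIZON)]
for t in range(HORIZON - 1, -1, -1):
    for tau in range(HORIZON):
        share = COMM_COST + staleness_penalty(0) + V[t + 1][1]            # sharing resets staleness
        silent = staleness_penalty(tau) + V[t + 1][min(tau + 1, HORIZON)]
        V[t][tau] = min(share, silent)
        policy[t][tau] = "share" if share < silent else "silent"

print("decision at t=0 by staleness level:", policy[0][:6])   # threshold-type policy
```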