
Prescribed Performance Distance-Based Formation Control of Multi-Agent Systems (Extended Version)

Published by Farhad Mehdifar
Publication date: 2019
Paper language: English





This paper presents a novel control protocol for robust distance-based formation control with prescribed performance, in which agents are subjected to unknown external disturbances. Connectivity maintenance and collision avoidance among neighboring agents are also handled by the appropriate design of certain performance bounds that constrain the inter-agent distance errors. As an extension of the proposed scheme, distance-based formation centroid maneuvering is also studied for disturbance-free agents, in which the formation centroid tracks a desired time-varying velocity. The proposed control laws are decentralized, in the sense that each agent employs local relative information regarding its neighbors to calculate its control signal. Therefore, the control scheme is implementable in the agents' local coordinate frames. Using rigid graph theory, input-to-state stability, and Lyapunov-based analysis, the results are established for minimally and infinitesimally rigid formations in 2-D or 3-D space. Furthermore, it is argued that the proposed approach increases formation robustness against shape distortions and can prevent formation convergence to incorrect shapes, which is likely to happen in conventional distance-based formation control methods. Finally, extensive simulation studies clarify and verify the proposed approach.
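The prescribed performance mechanism behind abstracts like this one can be illustrated with a standard funnel construction (a generic sketch with assumed gains and bounds, not the paper's actual control law): an exponentially decaying bound rho(t) constrains the error, and a logarithmic transformation penalizes the error increasingly as it approaches that bound.

```python
import math

def perf_bound(t, rho0=2.0, rho_inf=0.1, decay=1.0):
    """Exponentially decaying performance bound rho(t) on the error."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Logarithmic transformation: grows unbounded as |e| approaches
    rho, so bounded-gain feedback keeps e inside the shrinking funnel."""
    r = e / rho
    return math.log((1.0 + r) / (1.0 - r))

# Toy first-order error dynamics driven by the transformed error
# (illustrative gains; the initial error starts inside the funnel)
e, gain, dt = 1.5, 2.0, 0.01
for step in range(500):
    rho = perf_bound(step * dt)
    e += dt * (-gain * transformed_error(e, rho))
```

After 5 simulated seconds the error has decayed well inside the bound rho(5), which is how such designs enforce both transient and steady-state performance.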




Read also

This work proposes a novel 2-D formation control scheme for acyclic triangulated directed graphs (a class of minimally acyclic persistent graphs) based on bipolar coordinates, with (almost) global convergence to the desired shape. Prescribed performance control is employed to devise a decentralized control law that avoids singularities and introduces robustness against external disturbances, while ensuring predefined transient and steady-state performance for the closed-loop system. Furthermore, it is shown that the proposed formation control scheme can handle formation maneuvering, scaling, and orientation specifications simultaneously. Additionally, the proposed control law is implementable in the agents' arbitrarily oriented local coordinate frames using only low-cost onboard vision sensors, which are favorable for practical applications. Finally, various simulation studies clarify and verify the proposed approach.
A multi-agent system designed to achieve distance-based shape control with flocking behavior can be seen as a mechanical system described by a Lagrangian function and subject to additional external forces. Forced variational integrators are given by the discretization of the Lagrange-d'Alembert principle for systems subject to external forces, and have proved useful for numerical simulation studies of complex dynamical systems. We derive forced variational integrators that can be employed in the context of control algorithms for distance-based shape control with velocity consensus. In particular, we provide an accurate numerical integrator with a lower computational cost than traditional solutions, while preserving the configuration space and symmetries. We also provide an explicit expression for the integration scheme in the case of an arbitrary number of agents with double-integrator dynamics. For a numerical comparison of the performances, we use a planar formation consisting of three autonomous agents.
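For a single point-mass agent with Lagrangian L = ½m|v|², discretizing the Lagrange-d'Alembert principle with an external force yields a velocity-Verlet-type update. The sketch below is illustrative only (it uses a stand-in spring force, not the paper's distance-based shape control force), but velocity Verlet is itself a variational integrator, so it shows the characteristic long-horizon energy behavior the abstract refers to.

```python
def forced_verlet_step(q, v, force, m=1.0, h=0.01):
    """One step of a forced variational (velocity Verlet) integrator:
    the discrete Lagrange-d'Alembert principle for L = 0.5*m*v^2
    with an external force yields this symplectic update."""
    a = force(q) / m
    q_next = q + h * v + 0.5 * h * h * a
    v_next = v + 0.5 * h * (a + force(q_next) / m)
    return q_next, v_next

# Illustrative external force: a linear spring toward the origin
spring = lambda q: -4.0 * q

q, v = 1.0, 0.0
for _ in range(1000):
    q, v = forced_verlet_step(q, v, spring)
# Energy 0.5*v^2 + 2*q^2 stays near its initial value 2.0
energy = 0.5 * v * v + 2.0 * q * q
```

The near-preservation of energy (and, more generally, of configuration-space structure) over long runs is what makes variational integrators attractive for simulating formation dynamics.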
The problem of time-constrained multi-agent task scheduling and control synthesis is addressed. We assume the existence of a high-level plan which consists of a sequence of cooperative tasks, each of which is associated with a deadline and several Quality-of-Service levels. By taking into account the reward and cost of satisfying each task, a novel scheduling problem is formulated and a path synthesis algorithm is proposed. Based on the obtained plan, a distributed hybrid control law is further designed for each agent. Under the condition that only a subset of the agents are aware of the high-level plan, it is shown that the proposed controller guarantees the satisfaction of time constraints for each task. A simulation example is given to verify the theoretical results.
Distributed algorithms for both discrete-time and continuous-time linearly solvable optimal control (LSOC) problems of networked multi-agent systems (MASs) are investigated in this paper. A distributed framework is proposed to partition the optimal control problem of a networked MAS into several local optimal control problems in factorial subsystems, such that each (central) agent behaves optimally to minimize the joint cost function of a subsystem that comprises a central agent and its neighboring agents, and the local control actions (policies) only rely on the knowledge of local observations. Under this framework, we not only preserve the correlations between neighboring agents, but moderate the communication and computational complexities by decentralizing the sampling and computational processes over the network. For discrete-time systems modeled by Markov decision processes, the joint Bellman equation of each subsystem is transformed into a system of linear equations and solved using parallel programming. For continuous-time systems modeled by Itô diffusion processes, the joint optimality equation of each subsystem is converted into a linear partial differential equation, whose solution is approximated by a path integral formulation and a sample-efficient relative entropy policy search algorithm, respectively. The learned control policies are generalized to solve the unlearned tasks by resorting to the compositionality principle, and illustrative examples of cooperative UAV teams are provided to verify the effectiveness and advantages of these algorithms.
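The linearization step for discrete-time LSOC can be sketched on a toy first-exit problem (a Todorov-style linearly solvable MDP; the states, costs, and passive dynamics below are invented for illustration, not taken from the paper). The exponentiated cost-to-go z = exp(-V) satisfies a linear equation in z, so the Bellman equation reduces to solving a linear system.

```python
import numpy as np

# Toy first-exit LSOC problem: states 0..3, state 3 is terminal.
q = np.array([1.0, 1.0, 1.0, 0.0])          # state costs
P = np.array([[0.50, 0.50, 0.00, 0.00],     # passive dynamics
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.00, 1.00]])

# Desirability z = exp(-V). Interior states satisfy the LINEAR equation
#   z_i = exp(-q_i) * sum_j P_ij z_j,  with z_terminal = exp(-q_terminal),
# so the joint Bellman equation becomes a linear system A z_int = b.
M = np.diag(np.exp(-q)) @ P
interior = [0, 1, 2]
A = np.eye(3) - M[np.ix_(interior, interior)]
b = M[np.ix_(interior, [3])].ravel() * np.exp(-q[3])

z = np.ones(4)                              # z[3] = exp(-0) = 1
z[interior] = np.linalg.solve(A, b)
V = -np.log(z)                              # optimal cost-to-go
```

Cost-to-go decreases monotonically toward the terminal state, as expected; in the distributed framework above, each subsystem would assemble and solve such a system using only local observations.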
This paper develops an efficient multi-agent deep reinforcement learning algorithm for cooperative controls in power grids. Specifically, we consider the decentralized inverter-based secondary voltage control problem in distributed generators (DGs), which is first formulated as a cooperative multi-agent reinforcement learning (MARL) problem. We then propose a novel on-policy MARL algorithm, PowerNet, in which each agent (DG) learns a control policy based on a (sub-)global reward but local states from its neighboring agents. Motivated by the fact that a local control from one agent has limited impact on agents distant from it, we exploit a novel spatial discount factor to reduce the effect from remote agents, to expedite the training process and improve scalability. Furthermore, a differentiable, learning-based communication protocol is employed to foster collaboration among neighboring agents. In addition, to mitigate the effects of system uncertainty and random noise introduced during on-policy learning, we utilize an action smoothing factor to stabilize the policy execution. To facilitate training and evaluation, we develop PGSim, an efficient, high-fidelity power grid simulation platform. Experimental results in two microgrid setups show that the developed PowerNet outperforms a conventional model-based control, as well as several state-of-the-art MARL algorithms. The decentralized learning scheme and high sample efficiency also make it viable for large-scale power grids.
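The spatial discount factor idea can be sketched as follows (function and variable names here are assumptions for illustration, not PowerNet's actual API): each agent weights the other agents' local rewards by gamma_s raised to the hop distance between them, so gamma_s = 0 recovers a purely local reward and gamma_s = 1 an undiscounted global sum.

```python
import numpy as np

def spatially_discounted_reward(local_rewards, hop_dist, agent, gamma_s=0.6):
    """Training reward for one agent: other agents' local rewards are
    down-weighted by gamma_s ** (hop distance on the grid graph)."""
    weights = gamma_s ** hop_dist[agent]     # gamma_s^d(i,j) for each j
    return float(np.dot(weights, local_rewards))

local_rewards = np.array([1.0, 0.5, -0.2, 0.8])
# hop_dist[i][j] = graph distance between agents i and j (a line graph here)
hop_dist = np.array([[0, 1, 2, 3],
                     [1, 0, 1, 2],
                     [2, 1, 0, 1],
                     [3, 2, 1, 0]])

r0 = spatially_discounted_reward(local_rewards, hop_dist, agent=0)
```

Damping remote agents' rewards this way reduces the variance each agent's policy gradient sees from far-away parts of the grid, which is the scalability argument made in the abstract.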