
A Suboptimality Approach to Distributed $\mathcal{H}_2$ Optimal Control

Added by Junjie Jiao
Publication date: 2018
Language: English





This paper deals with the distributed $\mathcal{H}_2$ optimal control problem for linear multi-agent systems. In particular, we consider a suboptimal version of the distributed $\mathcal{H}_2$ optimal control problem. Given a linear multi-agent system with identical agent dynamics and an associated $\mathcal{H}_2$ cost functional, our aim is to design a distributed diffusive static protocol such that the protocol achieves state synchronization for the controlled network and such that the associated cost is smaller than an a priori given upper bound. We first analyze the $\mathcal{H}_2$ performance of linear systems and then apply the results to linear multi-agent systems. Two design methods are provided to compute such a suboptimal distributed protocol. For each method, the expression for the local control gain involves a solution of a single Riccati inequality of dimension equal to the dimension of the individual agent dynamics, and the smallest nonzero and the largest eigenvalue of the graph Laplacian.
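As a rough Python sketch of the ingredients named above, the snippet below solves one algebraic Riccati equation of the agent's dimension, extracts the smallest nonzero and largest eigenvalues of a graph Laplacian, and scales the resulting feedback by a coupling gain. The double-integrator dynamics, the weights, the cycle graph, and the choice $c = 1/\lambda_2$ are illustrative assumptions; the paper itself works with a Riccati inequality and gain conditions involving both eigenvalues.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single agent (double integrator) and H2-type weights.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)   # state weight
R = np.eye(1)   # input weight

# Undirected cycle graph on N agents; Laplacian = D - A_adj.
N = 4
A_adj = np.zeros((N, N))
for i in range(N):
    A_adj[i, (i + 1) % N] = A_adj[(i + 1) % N, i] = 1.0
Lap = np.diag(A_adj.sum(axis=1)) - A_adj

eigs = np.sort(np.linalg.eigvalsh(Lap))
lam2, lamN = eigs[1], eigs[-1]   # smallest nonzero and largest eigenvalue

# One Riccati equation of the agent's dimension (stand-in for the paper's
# Riccati inequality), and a coupling gain built from the graph data.
P = solve_continuous_are(A, B, Q, R)
c = 1.0 / lam2                         # assumed choice; the paper's condition also uses lamN
K = -c * np.linalg.solve(R, B.T @ P)   # local gain in u_i = K * sum_j a_ij (x_i - x_j)

print("lam2 =", lam2, " lamN =", lamN)
print("K =", K)
```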



Related research

This paper is concerned with the distributed linear quadratic optimal control problem. In particular, we consider a suboptimal version of the distributed optimal control problem for undirected multi-agent networks. Given a multi-agent system with identical agent dynamics and an associated global quadratic cost functional, our objective is to design suboptimal distributed control laws that guarantee that the controlled network reaches consensus and that the associated cost is smaller than an a priori given upper bound. We first analyze the suboptimality for a given linear system and then apply the results to linear multi-agent systems. Two design methods are then provided to compute such suboptimal distributed controllers, involving the solution of a single Riccati inequality of dimension equal to the dimension of the agent dynamics, and the smallest nonzero and the largest eigenvalue of the graph Laplacian. Furthermore, we relax the requirement of exact knowledge of the smallest nonzero and largest eigenvalue of the graph Laplacian by using only lower and upper bounds on these eigenvalues. Finally, a simulation example is provided to illustrate our design method.
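A hedged sketch of the eigenvalue-bound relaxation mentioned above: the function below builds the same Riccati-based gain but is handed only a lower bound on $\lambda_2$ and an upper bound on $\lambda_N$. The function name, the coupling-gain choice, and the numerical bounds are hypothetical placeholders rather than the paper's exact conditions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def suboptimal_gain(A, B, Q, R, lam2_lower, lamN_upper):
    """Sketch of a distributed LQ gain that uses only a lower bound on the
    smallest nonzero Laplacian eigenvalue and an upper bound on the largest
    one (hypothetical parameterization, not the paper's exact condition)."""
    P = solve_continuous_are(A, B, Q, R)
    c = 1.0 / lam2_lower   # conservative coupling gain from the lower bound
    _ = lamN_upper         # enters the paper's cost bound; unused in this toy gain
    return -c * np.linalg.solve(R, B.T @ P)

# Same double-integrator agents; bounds could come from degree-based estimates.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = suboptimal_gain(A, B, np.eye(2), np.eye(1), lam2_lower=0.5, lamN_upper=4.0)
print("K =", K)
```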
This paper deals with suboptimal distributed $\mathcal{H}_2$ control by dynamic output feedback for homogeneous linear multi-agent systems. Given a linear multi-agent system, together with an associated $\mathcal{H}_2$ cost functional, the objective is to design dynamic output feedback protocols that guarantee the associated cost to be smaller than an a priori given upper bound while synchronizing the controlled network. A design method is provided to compute such protocols. The computation of the two local gains in these protocols involves two Riccati inequalities, each of dimension equal to the dimension of the state space of the agents. The largest and smallest nonzero eigenvalues of the Laplacian matrix of the network graph are also used in the computation of one of the two local gains. A simulation example is provided to illustrate the performance of the proposed protocols.
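A minimal sketch of the two-Riccati structure described above, assuming an LQG-like separation into a state-feedback gain and an observer gain; the matrices, weights, and the way the Laplacian eigenvalues would scale one of the gains are placeholders, not the paper's construction.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical agent with measured output y = C x.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)   # control-side weights
V, W = np.eye(2), np.eye(1)   # filter-side weights

# Two Riccati equations, each of the agent's state dimension, standing in for
# the two Riccati inequalities: one yields the feedback gain, the other the
# observer gain of the dynamic protocol.
P = solve_continuous_are(A, B, Q, R)        # control Riccati
S = solve_continuous_are(A.T, C.T, V, W)    # filter Riccati
F = -np.linalg.solve(R, B.T @ P)            # feedback gain (graph eigenvalues would scale this one)
G = S @ C.T @ np.linalg.inv(W)              # observer gain

print("F =", F, "\nG =", G)
```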
In this paper, we extend the results from Jiao et al. (2019) on distributed linear quadratic control for leaderless multi-agent systems to the case of distributed linear quadratic tracking control for leader-follower multi-agent systems. Given one autonomous leader and a number of homogeneous followers, we introduce an associated global quadratic cost functional. We assume that the leader shares its state information with at least one of the followers and the communication between the followers is represented by a connected simple undirected graph. Our objective is to design distributed control laws such that the controlled network reaches tracking consensus and, moreover, the associated cost is smaller than a given tolerance for all initial states bounded in norm by a given radius. We establish a centralized design method for computing such suboptimal control laws, involving the solution of a single Riccati inequality of dimension equal to the dimension of the local agent dynamics, and the smallest and the largest eigenvalue of a given positive definite matrix involving the underlying graph. The proposed design method is illustrated by a simulation example.
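For a concrete (assumed) form of the graph-dependent positive definite matrix mentioned above, a common leader-follower construction is $L + G$, where $G$ is a diagonal matrix marking which followers receive the leader's state; the sketch below only illustrates how its extreme eigenvalues would be obtained.

```python
import numpy as np

# Cycle graph among N followers; only follower 1 hears the leader.
N = 4
A_adj = np.zeros((N, N))
for i in range(N):
    A_adj[i, (i + 1) % N] = A_adj[(i + 1) % N, i] = 1.0
L = np.diag(A_adj.sum(axis=1)) - A_adj
G = np.diag([1.0, 0.0, 0.0, 0.0])

M = L + G                        # positive definite for a connected follower graph
eigs = np.linalg.eigvalsh(M)
print("smallest, largest eigenvalue:", eigs[0], eigs[-1])
```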
In this effort, a novel operator theoretic framework is developed for data-driven solution of optimal control problems. The developed methods focus on the use of trajectories (i.e., time-series) as the fundamental unit of data for the resolution of optimal control problems in dynamical systems. Trajectory information in the dynamical systems is embedded in a reproducing kernel Hilbert space (RKHS) through what are called occupation kernels. The occupation kernels are tied to the dynamics of the system through the densely defined Liouville operator. The pairing of Liouville operators and occupation kernels allows for lifting of nonlinear finite-dimensional optimal control problems into the space of infinite-dimensional linear programs over RKHSs.
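As a toy illustration of the trajectory-as-data idea, the snippet below forms an occupation kernel for a sampled trajectory in a Gaussian-RBF reproducing kernel Hilbert space, i.e. the function $x \mapsto \int_0^T k(x, \gamma(t))\,dt$, approximated by a Riemann sum. The kernel, trajectory, and discretization are assumptions for illustration; the actual framework further involves the Liouville operator and the resulting infinite-dimensional linear programs.

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian RBF kernel between a point x and an array of points y."""
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2 * sigma ** 2))

def occupation_kernel(gamma, dt):
    """Return x -> Riemann-sum approximation of the integral of k(x, gamma(t)) dt."""
    return lambda x: np.sum(rbf(x[None, :], gamma)) * dt

# Hypothetical 2-D trajectory sampled every 0.01 s.
t = np.arange(0.0, 5.0, 0.01)
gamma = np.stack([np.cos(t), np.sin(t)], axis=1)
Gamma = occupation_kernel(gamma, dt=0.01)
print(Gamma(np.array([1.0, 0.0])))
```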
We propose a neural network approach for solving high-dimensional optimal control problems. In particular, we focus on multi-agent control problems with obstacle and collision avoidance. These problems immediately become high-dimensional, even for moderate phase-space dimensions per agent. Our approach fuses the Pontryagin Maximum Principle and Hamilton-Jacobi-Bellman (HJB) approaches and parameterizes the value function with a neural network. Our approach yields controls in a feedback form for quick calculation and robustness to moderate disturbances to the system. We train our model using the objective function and optimality conditions of the control problem. Therefore, our training algorithm neither involves a data generation phase nor solutions from another algorithm. Our model uses empirically effective HJB penalizers for efficient training. By training on a distribution of initial states, we ensure the controls' optimality on a large portion of the state space. Our approach is grid-free and scales efficiently to dimensions where grids become impractical or infeasible. We demonstrate our approach's effectiveness on a 150-dimensional multi-agent problem with obstacles.
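A minimal sketch of the value-function idea on an assumed toy problem with single-agent dynamics $\dot{x} = u$ and running cost $\tfrac{1}{2}|u|^2 + \tfrac{1}{2}|x|^2$, for which the HJB residual is $V_t - \tfrac{1}{2}|\nabla_x V|^2 + \tfrac{1}{2}|x|^2$. The network, training loop, and omission of terminal conditions, obstacle terms, and the PMP coupling are simplifications, not the authors' method.

```python
import torch
import torch.nn as nn

class ValueNet(nn.Module):
    """Small network approximating V(t, x)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 64), nn.Tanh(),
                                 nn.Linear(64, 64), nn.Tanh(),
                                 nn.Linear(64, 1))
    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=-1))

def hjb_residual(V, t, x):
    # Toy problem: dx/dt = u, cost 0.5|u|^2 + 0.5|x|^2, hence u* = -grad_x V
    # and the HJB equation reads V_t - 0.5|V_x|^2 + 0.5|x|^2 = 0.
    t = t.clone().requires_grad_(True)
    x = x.clone().requires_grad_(True)
    v = V(t, x)
    v_t, v_x = torch.autograd.grad(v.sum(), (t, x), create_graph=True)
    return v_t - 0.5 * (v_x ** 2).sum(-1, keepdim=True) + 0.5 * (x ** 2).sum(-1, keepdim=True)

dim = 2
V = ValueNet(dim)
opt = torch.optim.Adam(V.parameters(), lr=1e-3)
for step in range(200):                 # penalize the HJB residual on random states
    t = torch.rand(256, 1)
    x = torch.randn(256, dim)
    loss = hjb_residual(V, t, x).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```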