This work studies distributed (probability) density estimation of large-scale systems. Such problems are motivated by many density-based distributed control tasks in which the real-time density of the swarm is used as feedback information, such as sensor deployment and city traffic scheduling. This work builds on our previous work [1], which presented a (centralized) density filter to estimate the dynamic density of large-scale systems through a novel integration of mean-field models, kernel density estimation (KDE), and infinite-dimensional Kalman filters. In this work, we further study how to decentralize the density filter so that each agent can estimate the global density based only on its local observation and communication with neighbors. This is achieved by noting that the global observation constructed by KDE is an average of the local kernels. Hence, dynamic average consensus algorithms are used by each agent to track the global observation in a distributed way. We present a distributed density filter that requires very little information exchange, and study its stability and optimality using the notion of input-to-state stability. Simulation results suggest that the distributed filter converges to the centralized filter and remains close to it.
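To make the mechanism concrete, the following minimal sketch shows how dynamic average consensus can track the global KDE observation, which is simply the average of the agents' local kernels evaluated on a grid. The ring communication graph, Gaussian kernel, bandwidth, step size, and agent dynamics are illustrative assumptions, not the exact setup of the paper.

```python
# Minimal sketch: agents use dynamic average consensus to track the global KDE
# observation (the average of their local Gaussian kernels) on a fixed grid.
import numpy as np

rng = np.random.default_rng(0)
N, h = 50, 0.3                      # number of agents, kernel bandwidth
grid = np.linspace(-3, 3, 61)       # evaluation grid for the density

def local_kernel(xi):
    """Agent i's local observation: its Gaussian kernel evaluated on the grid."""
    return np.exp(-0.5 * ((grid - xi) / h) ** 2) / (h * np.sqrt(2 * np.pi))

# Ring graph: each agent communicates with its two neighbours.
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = A[i, (i + 1) % N] = 1.0
eps = 0.3                           # consensus step size (eps < 1/max_degree suffices)

x = rng.normal(0.0, 1.0, N)                    # agent positions
r = np.array([local_kernel(xi) for xi in x])   # local reference signals
z = r.copy()                                   # each agent's estimate of the average

for k in range(200):
    x += 0.01 * rng.normal(size=N)             # agents drift slowly (noisy dynamics)
    r_new = np.array([local_kernel(xi) for xi in x])
    # Dynamic average consensus: mix with neighbours, then add the local signal change.
    z = z + eps * (A @ z - A.sum(axis=1)[:, None] * z) + (r_new - r)
    r = r_new

global_kde = r.mean(axis=0)                    # centralized KDE observation, for comparison
print("max deviation from global KDE:", np.abs(z - global_kde).max())
```

In this sketch, each agent only exchanges its current grid-valued estimate with its two neighbours per step, which is the sense in which very little information exchange is needed.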
This work studies how to estimate the mean-field density of large-scale systems in a distributed manner. Such problems are motivated by recent swarm control techniques that use mean-field approximations to represent the collective effect of the swarm, wherein the mean-field density (and its gradient) is usually used in feedback control design. In the first part, we formulate the density estimation problem as a filtering problem for the associated mean-field partial differential equation (PDE), for which we employ kernel density estimation (KDE) to construct noisy observations and use filtering theory of PDE systems to design an optimal (centralized) density filter. It turns out that the covariance operator of the observation noise depends on the unknown density. Hence, we approximate the covariance operator to obtain a suboptimal density filter, and prove that both the density estimates and their gradients are convergent and remain close to the optimal ones, using the notion of input-to-state stability (ISS). In the second part, we study how to decentralize the density filter so that each agent can estimate the mean-field density based only on its own position and local information exchange with neighbors. We prove that the local density filter is also convergent and remains close to the centralized one in the sense of ISS. Simulation results suggest that the (centralized) suboptimal density filter generates convergent density estimates, and that the local density filter converges to and remains close to the centralized filter.
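As a rough illustration of the suboptimal (centralized) filter described above, the sketch below discretizes a one-dimensional mean-field (Fokker-Planck) PDE on a grid, forms a KDE observation from simulated agent positions, and runs a Kalman-style update in which the density-dependent observation-noise covariance is approximated using the current estimate. The Ornstein-Uhlenbeck dynamics, grid, bandwidth, and diagonal covariance approximation are illustrative assumptions rather than the paper's exact construction.

```python
# Sketch of a suboptimal density filter on a 1-D grid: predict with a discretized
# Fokker-Planck equation, update with a KDE measurement whose noise covariance is
# approximated using the current density estimate.
import numpy as np

rng = np.random.default_rng(1)
N, h, dt = 500, 0.25, 0.01
x_grid = np.linspace(-4, 4, 81)
dx = x_grid[1] - x_grid[0]
drift = lambda x: -x           # Ornstein-Uhlenbeck drift (illustrative)
sigma = 0.7                    # diffusion coefficient

# Discretized Fokker-Planck generator: d rho/dt = -d/dx(drift*rho) + 0.5*sigma^2*d2rho/dx2
n = x_grid.size
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
D2 = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1)) / dx**2
F = np.eye(n) + dt * (-D1 @ np.diag(drift(x_grid)) + 0.5 * sigma**2 * D2)

agents = rng.normal(1.5, 0.5, N)                   # true swarm, initially off-target
rho = np.full(n, 1.0 / (x_grid[-1] - x_grid[0]))   # uniform initial estimate
P, Q = np.eye(n), 1e-4 * np.eye(n)

def kde(samples):
    d = (x_grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

for k in range(300):
    # Simulate the agents (Euler-Maruyama) and form the KDE observation.
    agents += drift(agents) * dt + sigma * np.sqrt(dt) * rng.normal(size=N)
    y = kde(agents)
    # Predict.
    rho, P = F @ rho, F @ P @ F.T + Q
    # Approximate the KDE noise covariance with the current estimate:
    # pointwise variance ~ rho(x) * int K^2 / (N*h), with int K^2 = 1/(2*sqrt(pi)).
    R = np.diag(np.maximum(rho, 1e-3) / (N * h * 2 * np.sqrt(np.pi)))
    # Kalman update (observation operator is the identity on the grid).
    K = P @ np.linalg.solve(P + R, np.eye(n))
    rho = rho + K @ (y - rho)
    P = (np.eye(n) - K) @ P

print("estimated mass:", rho.sum() * dx)   # should remain close to 1
```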
Large-scale agent systems have foreseeable applications in the near future. Estimating their macroscopic density is critical for many density-based optimization and control tasks, such as sensor deployment and city traffic scheduling. In this paper, we study the problem of estimating their dynamically varying probability density, given the agents' individual dynamics (which can be nonlinear and time-varying) and their states observed in real time. The density evolution is shown to satisfy a linear partial differential equation uniquely determined by the agents' dynamics. We present a density filter that takes advantage of the system dynamics to gradually improve its estimates and that scales well with the agent population. Specifically, we use kernel density estimation (KDE) to construct a noisy measurement and show that, when the agent population is large, the measurement noise is approximately Gaussian. With this important property, infinite-dimensional Kalman filters are used to design density filters. It turns out that the covariance of the measurement noise depends on the true density. This state dependence makes it necessary to approximate the covariance in the associated operator Riccati equation, rendering the density filter suboptimal. The notion of input-to-state stability is used to prove that the performance of the suboptimal density filter remains close to the optimal one. Simulation results suggest that the proposed density filter is able to quickly recognize the underlying modes of the unknown density and automatically ignore outliers, and is robust to different choices of the KDE kernel bandwidth.
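The approximate Gaussianity of the KDE measurement noise for a large population follows from the central limit theorem, since the KDE at a fixed point is an average of i.i.d. kernel evaluations. The short Monte Carlo sketch below illustrates this; the standard-normal true density, bandwidth, and evaluation point are illustrative choices.

```python
# Minimal sketch: the KDE error at a fixed point is approximately Gaussian for a
# large agent population, because it is an average of i.i.d. kernel evaluations.
import numpy as np

rng = np.random.default_rng(2)
N, h, x0, trials = 2000, 0.2, 0.5, 5000

def kde_at(samples, x):
    u = (x - samples) / h
    return np.exp(-0.5 * u**2).mean() / (h * np.sqrt(2 * np.pi))

errors = np.array([kde_at(rng.normal(size=N), x0) for _ in range(trials)])
errors -= errors.mean()                  # remove the (deterministic) KDE bias

std = errors.std()
skew = np.mean((errors / std) ** 3)      # ~0 for a Gaussian
kurt = np.mean((errors / std) ** 4) - 3  # ~0 for a Gaussian
print(f"skewness {skew:+.3f}, excess kurtosis {kurt:+.3f}")  # both close to zero
```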
This work studies the problem of controlling the probability density of large-scale stochastic systems, which has applications in various fields such as swarm robotics. Recently, a growing body of literature has employed partial differential equations (PDEs) to model the density evolution and used density feedback to design control laws which, by acting on individual systems, stabilize their density toward a target profile. Despite its stability guarantees and computational efficiency, the success of density feedback relies on the assumption that the systems are homogeneous first-order integrators (plus white noise) and ignores higher-order dynamics, making it less applicable in practice. In this work, we present a backstepping design algorithm that extends density control to heterogeneous and higher-order stochastic systems in strict-feedback form. We show that the strict-feedback form at the individual level corresponds, at the collective level, to a PDE (of densities) driven in a distributed way by a collection of heterogeneous stochastic systems. The presented backstepping design then starts with a density feedback design for the PDE, followed by a sequence of stabilizing designs for the remaining stochastic systems. We present a candidate control law with a stability proof and apply it to nonholonomic mobile robots. A simulation is included to verify the effectiveness of the algorithm.
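For intuition about the density-feedback building block that the backstepping design starts from, the sketch below simulates first-order integrator agents driven by a representative density-feedback velocity field of the form v(x) = -alpha * grad(rho_hat - rho_star)(x) / rho_hat(x); at the mean-field level this turns the density error into a heat equation and drives the swarm toward the target. This is a commonly used law in this line of work, not the paper's full backstepping controller, and the bimodal target, gain, KDE bandwidth, and low-density regularization are illustrative choices.

```python
# Toy sketch of density feedback for first-order integrator agents in 1-D.
import numpy as np

rng = np.random.default_rng(3)
N, h, alpha, dt = 400, 0.25, 0.5, 0.02
grid = np.linspace(-4, 4, 161)

def target(x):   # bimodal target density (illustrative)
    return 0.5 * (np.exp(-0.5 * ((x - 1.5) / 0.5) ** 2) +
                  np.exp(-0.5 * ((x + 1.5) / 0.5) ** 2)) / (0.5 * np.sqrt(2 * np.pi))

def kde(samples):
    d = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

x = rng.normal(0.0, 0.3, N)           # agents start clustered at the origin
for k in range(600):
    rho_hat = kde(x)
    err_grad = np.gradient(rho_hat - target(grid), grid)
    # Each agent evaluates the feedback field at its own position
    # (density regularized away from zero to keep velocities bounded).
    v = (-alpha * np.interp(x, grid, err_grad)
         / np.maximum(np.interp(x, grid, rho_hat), 5e-2))
    x += v * dt + 0.05 * np.sqrt(dt) * rng.normal(size=N)   # small process noise

err = kde(x) - target(grid)
print("final L2 density error:", np.sqrt(np.sum(err**2) * (grid[1] - grid[0])))
```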
This paper aims to create a secure environment for networked control systems composed of multiple dynamic entities and computational control units interconnected over a network, in the presence of disclosure attacks. In particular, we consider the situation where some dynamic entities or control units are vulnerable to attacks and can become malicious. Our objective is to ensure that the input and output data of the benign entities are protected from the malicious entities, as well as protected when they are transferred over the network in a distributed environment. Both of these security requirements are achieved using cryptographic techniques. However, the use of cryptographic mechanisms brings additional challenges to the design of controllers in the encrypted state space: the closed-loop system gains and states are required to be compatible with the specified cryptographic algorithms. In this paper, we propose a methodology for the design of secure networked control systems that integrates the cryptographic mechanisms with the control algorithms. The approach is based on the separation principle, with the cryptographic techniques addressing the security requirements and the control algorithms satisfying their performance requirements.
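As a toy illustration of computing a control law directly on encrypted data, the sketch below uses textbook (additively homomorphic) Paillier encryption so that a controller can evaluate u = K x on ciphertexts without seeing the state. This is not the paper's scheme: the primes are far too small to be secure, the fixed-point quantization of gains and states is a simplistic choice, and key management and the paper's separation-principle design are not reproduced.

```python
# Toy sketch: evaluating a static feedback law on ciphertexts with textbook Paillier.
import math
import secrets

p, q = 10007, 10009              # toy primes (real deployments need >= 2048-bit keys)
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):                       # Paillier encryption of an integer message m mod n
    r = secrets.randbelow(n - 1) + 1
    return (pow(g, m % n, n2) * pow(r, n, n2)) % n2

def dec(c):                       # decryption; map back to a signed integer
    m = ((pow(c, lam, n2) - 1) // n) * mu % n
    return m - n if m > n // 2 else m

# Plant side: quantize and encrypt the state.
S = 1000                          # fixed-point scaling factor (illustrative)
x = [0.37, -1.25]                 # current state
cx = [enc(round(S * xi)) for xi in x]

# Controller side: compute u = K @ x on ciphertexts only, using the additive
# homomorphism  Enc(a)*Enc(b) = Enc(a+b)  and  Enc(a)^k = Enc(k*a).
K = [4, -2]                       # integer controller gains (illustrative)
cu = enc(0)
for ci, ki in zip(cx, K):
    cu = (cu * pow(ci, ki % n, n2)) % n2

# Plant side: decrypt and rescale the control input.
u = dec(cu) / S
print(u, "vs plaintext", K[0] * x[0] + K[1] * x[1])   # both 3.98
```

The quantization step is where the abstract's compatibility requirement shows up: the gains and states must be encoded as integers that the chosen cryptosystem can operate on.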
This paper presents a networked hardware-in-the-loop (HIL) simulation system for modeling large-scale power systems. Researchers have developed many HIL test systems for power systems in recent years. Those test systems can model both microsecond-level dynamic responses of power electronic systems and millisecond-level transients of transmission and distribution grids. By integrating individual HIL test systems into a network of HIL test systems, we can create large-scale power grid digital twins with flexible structures at the required modeling resolution, fitting a wide range of system operating conditions. This will not only significantly reduce the need for field tests when developing new technologies but also greatly shorten the model development cycle. In this paper, we present a networked, OPAL-RT-based HIL test system for developing transmission-distribution coordinative Volt-VAR regulation technologies as an example to illustrate the system setup, the communication requirements among different HIL simulation systems, and the system connection mechanisms. The impacts of communication delays, information exchange cycles, and computing delays are illustrated. Simulation results show that the performance of the networked HIL test system is satisfactory.
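To give a flavor of how communication delays and information-exchange cycles enter a networked co-simulation, the toy sketch below couples an abstract transmission-level VAR dispatcher with an abstract distribution-level bus model that exchange data periodically over a delayed channel. It is not an OPAL-RT or power-flow model; all dynamics, gains, and time constants are illustrative, and it only shows qualitatively that larger delays increase overshoot and slow down the coordinated Volt-VAR loop.

```python
# Toy sketch (not an OPAL-RT model): two coupled simulators exchange data every
# `exchange` steps over a channel with `delay` steps of latency.
def run(delay, exchange, steps=800, dt=0.01, Ki=20.0):
    v_ref = 1.02                       # target distribution-bus voltage (p.u.)
    v, q_cmd, q_applied = 0.97, 0.0, 0.0
    v_seen, peak = v, v
    up, down = [], []                  # messages in transit: (arrival step, value)
    for k in range(steps):
        if k % exchange == 0:
            up.append((k + delay, v))                       # distribution -> transmission
            while up and up[0][0] <= k:
                v_seen = up.pop(0)[1]
            q_cmd += Ki * (v_ref - v_seen) * exchange * dt  # integral VAR dispatch (toy)
            down.append((k + delay, q_cmd))                 # transmission -> distribution
        while down and down[0][0] <= k:
            q_applied = down.pop(0)[1]
        v += (-(v - 0.97) + 0.05 * q_applied) * dt / 0.05   # toy bus-voltage response
        peak = max(peak, v)
    return abs(v - v_ref), max(peak - v_ref, 0.0)

for delay in (0, 20, 50):
    err, overshoot = run(delay=delay, exchange=10)
    print(f"delay = {delay * 0.01:4.2f} s -> final |error| = {err:.4f} p.u., "
          f"overshoot = {overshoot:.4f} p.u.")
```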