This work studies how to estimate the mean-field density of large-scale systems in a distributed manner. Such problems are motivated by recent swarm control techniques that use mean-field approximations to represent the collective effect of the swarm, where the mean-field density (and its gradient) is typically used in feedback control design. In the first part, we formulate the density estimation problem as a filtering problem for the associated mean-field partial differential equation (PDE): we employ kernel density estimation (KDE) to construct noisy observations and use the filtering theory of PDE systems to design an optimal (centralized) density filter. It turns out that the covariance operator of the observation noise depends on the unknown density. Hence, we approximate the covariance operator to obtain a suboptimal density filter, and prove that both the density estimates and their gradients are convergent and remain close to the optimal ones in the sense of input-to-state stability (ISS). In the second part, we study how to decentralize the density filter so that each agent can estimate the mean-field density based only on its own position and local information exchange with neighbors. We prove that the local density filter is also convergent and remains close to the centralized one in the sense of ISS. Simulation results suggest that the (centralized) suboptimal density filter generates convergent density estimates, and that the local density filter converges to and remains close to the centralized filter.
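The pipeline described above, KDE observations combined with a predict/correct filter driven by the mean-field PDE, can be sketched as follows. This is a minimal 1-D illustration assuming pure-diffusion agent dynamics, a Gaussian kernel, and a constant scalar gain standing in for the optimal operator gain; all function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def kde_observation(positions, grid, h=0.3):
    """Gaussian KDE of agent positions evaluated on a 1-D grid:
    y(x) = (1/N) * sum_i K_h(x - x_i)."""
    diffs = grid[:, None] - positions[None, :]
    kernels = np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return kernels.mean(axis=1)

def density_filter_step(p_hat, y, dx, dt, D=0.05, gain=0.5):
    """One predict/correct step: propagate the estimate through the
    mean-field PDE (here pure diffusion, periodic boundary), then
    correct toward the KDE observation with a constant gain."""
    lap = (np.roll(p_hat, -1) - 2 * p_hat + np.roll(p_hat, 1)) / dx ** 2
    p_pred = p_hat + dt * D * lap          # predictor: forward-Euler diffusion
    p_new = p_pred + gain * (y - p_pred)   # corrector toward the observation
    return np.clip(p_new, 0.0, None)       # keep the estimate nonnegative
```

In the paper the gain is an operator computed from a Riccati equation; the constant blend above only conveys the predict/correct structure.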
This work studies distributed (probability) density estimation for large-scale systems. Such problems are motivated by many density-based distributed control tasks in which the real-time density of the swarm is used as feedback information, such as sensor deployment and city traffic scheduling. This work builds upon our previous work [1], which presented a (centralized) density filter to estimate the dynamic density of large-scale systems through a novel integration of mean-field models, kernel density estimation (KDE), and infinite-dimensional Kalman filters. In this work, we further study how to decentralize the density filter so that each agent can estimate the global density based only on its local observation and communication with neighbors. This is achieved by noting that the global observation constructed by KDE is an average of the local kernels. Hence, dynamic average consensus algorithms are used by each agent to track the global observation in a distributed way. We present a distributed density filter that requires very little information exchange, and study its stability and optimality using the notion of input-to-state stability. Simulation results suggest that the distributed filter converges to the centralized filter and remains close to it.
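The key observation above, that the KDE is an average of local kernels which each agent can track by consensus, can be illustrated with a minimal first-order average-consensus sketch. For simplicity the local signals are held static (the dynamic version adds the signal's increment at each step); the graph, step size, and names are illustrative assumptions.

```python
import numpy as np

def average_consensus(U, A, steps=200, eps=0.1):
    """First-order average consensus over a communication graph.
    U: (N, d) array of local signals (e.g., each agent's kernel
       evaluated on a shared grid).
    A: (N, N) symmetric adjacency matrix.
    Each agent updates using only its neighbors' states, yet every
    row converges to the network-wide average of the rows of U."""
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    X = U.copy()                     # initialize at the local signals
    for _ in range(steps):
        X = X - eps * (L @ X)        # local, neighbor-only update
    return X
```

Because the update preserves the column sums of `X`, each agent's state converges to the average of the local kernels, i.e., exactly the global KDE observation the centralized filter would use.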
Large-scale agent systems have foreseeable applications in the near future. Estimating their macroscopic density is critical for many density-based optimization and control tasks, such as sensor deployment and city traffic scheduling. In this paper, we study the problem of estimating their dynamically varying probability density, given the agents' individual dynamics (which can be nonlinear and time-varying) and their states observed in real time. The density evolution is shown to satisfy a linear partial differential equation uniquely determined by the agents' dynamics. We present a density filter which takes advantage of the system dynamics to gradually improve its estimates and is scalable with the agent population. Specifically, we use kernel density estimators (KDE) to construct a noisy measurement and show that, when the agent population is large, the measurement noise is approximately Gaussian. With this important property, infinite-dimensional Kalman filters are used to design density filters. It turns out that the covariance of the measurement noise depends on the true density. This state dependence makes it necessary to approximate the covariance in the associated operator Riccati equation, rendering the density filter suboptimal. The notion of input-to-state stability is used to prove that the performance of the suboptimal density filter remains close to the optimal one. Simulation results suggest that the proposed density filter is able to quickly recognize the underlying modes of the unknown density and automatically ignore outliers, and is robust to different choices of the KDE kernel bandwidth.
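A finite-dimensional analogue of the filter described above can be sketched as a discretized Kalman step. The infinite-dimensional operators become matrices on a grid; since the true measurement-noise covariance depends on the unknown density, a fixed approximation is substituted, which is what makes the filter suboptimal. All names and the identity measurement map are illustrative assumptions.

```python
import numpy as np

def kalman_density_step(p_hat, P, y, A, Q, R):
    """One discrete-time Kalman step on a grid-discretized density PDE.
    A: discretized PDE propagator, Q: process-noise covariance,
    R: approximation of the (density-dependent) measurement-noise
    covariance, y: KDE measurement on the grid (C = I assumed)."""
    p_pred = A @ p_hat                    # predict through the PDE dynamics
    P_pred = A @ P @ A.T + Q              # propagate estimation covariance
    S = P_pred + R                        # innovation covariance
    K = P_pred @ np.linalg.inv(S)         # Kalman gain
    p_new = p_pred + K @ (y - p_pred)     # correct with the KDE measurement
    P_new = (np.eye(len(p_hat)) - K) @ P_pred
    return p_new, P_new
```

The operator Riccati equation of the paper corresponds to the `P` recursion here; approximating `R` with a density-independent surrogate is the step that trades optimality for tractability.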
This work studies the problem of controlling the probability density of large-scale stochastic systems, which has applications in various fields such as swarm robotics. Recently, there has been a growing amount of literature that employs partial differential equations (PDEs) to model the density evolution and uses density feedback to design control laws which, by acting on individual systems, stabilize their density towards a target profile. Despite its stability properties and computational efficiency, the success of density feedback relies on assuming the systems to be homogeneous first-order integrators (plus white noise) and ignores higher-order dynamics, making it less applicable in practice. In this work, we present a backstepping design algorithm that extends density control to heterogeneous and higher-order stochastic systems in strict-feedback form. We show that the strict-feedback form at the individual level corresponds, at the collective level, to a PDE (of densities) distributedly driven by a collection of heterogeneous stochastic systems. The presented backstepping design then starts with a density feedback design for the PDE, followed by a sequence of stabilizing designs for the remaining stochastic systems. We present a candidate control law with a stability proof and apply it to nonholonomic mobile robots. A simulation is included to verify the effectiveness of the algorithm.
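The backstepping idea described above can be illustrated on a generic second-order strict-feedback pair; the notation below is an assumption for illustration, not taken from the paper.

```latex
% Illustrative strict-feedback system (notation assumed):
\mathrm{d}X_1 = X_2\,\mathrm{d}t + \sqrt{2D}\,\mathrm{d}W_t,
\qquad
\mathrm{d}X_2 = u\,\mathrm{d}t .
% Step 1: treat X_2 as a virtual input and choose a density-feedback
% law \alpha(X_1, p) so that the density p of X_1 satisfies the
% stabilized mean-field PDE
\partial_t p = -\nabla\cdot\bigl(p\,\alpha\bigr) + D\,\Delta p .
% Step 2: stabilize the backstepping error z = X_2 - \alpha with
u = \dot{\alpha} - k z, \qquad k > 0,
% so that z \to 0 and the density-feedback behavior is recovered.
```

Each additional integrator in the strict-feedback chain adds one more such error-stabilization step.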
Swarm robotic systems have foreseeable applications in the near future. Recently, there has been an increasing amount of literature that employs mean-field partial differential equations (PDEs) to model the time evolution of the probability density of swarm robotic systems and uses mean-field feedback to design stable control laws that act on individuals such that their density converges to a target profile. However, it remains largely unexplored how to estimate the mean-field density, how the density estimation algorithms affect the control performance, and whether the estimation performance in turn depends on the control algorithms. In this work, we focus on studying the interplay of these algorithms. Specifically, we propose new mean-field control laws which use the real-time density and its gradient as feedback, and prove that they are globally input-to-state stable (ISS) with respect to estimation errors. Then, we design filtering algorithms to obtain estimates of the density and its gradient, and prove that these estimates are convergent assuming the control laws are known. Finally, we show that the feedback interconnection of these estimation and control algorithms is still globally ISS, which is attributed to the bilinearity of the mean-field PDE system. An agent-based simulation is included to verify the stability of these algorithms and their feedback interconnection.
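For concreteness, the ISS property invoked above takes the following standard form; the symbols are generic, not the paper's.

```latex
% ISS estimate for a state (or error) e driven by a disturbance d
% (here: the estimation error entering the control loop):
\|e(t)\| \;\le\; \beta\bigl(\|e(0)\|,\,t\bigr)
 \;+\; \gamma\!\Bigl(\sup_{0\le s\le t}\|d(s)\|\Bigr),
\qquad
\beta \in \mathcal{KL},\; \gamma \in \mathcal{K}.
```

ISS of each subsystem, combined with a small-gain-type argument, is what lets the estimation and control loops be interconnected without losing stability.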
Recent years have seen an increased interest in using mean-field density-based modeling and control strategies for deploying robotic swarms. In this paper, we study how to dynamically deploy the robots, subject to their physical constraints, to efficiently measure and reconstruct an unknown spatial field (e.g., the air pollution index over a city). Specifically, the evolution of the robots' density is modeled by mean-field partial differential equations (PDEs) which are uniquely determined by the robots' individual dynamics. Bayesian regression models are used to obtain predictions and return a variance function that represents the confidence of the prediction. We formulate a PDE-constrained optimization problem based on this variance function to dynamically generate a reference density signal that guides the robots to uncertain areas to collect new data, and design mean-field feedback control laws such that the robots' density converges to this reference signal. We also show that the proposed feedback law is robust to density estimation errors in the sense of input-to-state stability. Simulations are included to verify the effectiveness of the algorithm.
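The variance function mentioned above can be produced, for instance, by Gaussian-process regression; the sketch below computes the posterior variance of a squared-exponential GP as a stand-in for the Bayesian-regression confidence that steers robots toward under-sampled regions. The hyperparameters and function names are illustrative assumptions, not the paper's.

```python
import numpy as np

def gp_posterior_variance(X_train, X_query, ell=0.5, sf2=1.0, noise=1e-2):
    """Posterior variance of a 1-D squared-exponential GP at query points.
    Variance is small near observed locations and approaches the prior
    variance sf2 far from the data, which is what makes it useful for
    guiding robots toward uncertain areas."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sf2 * np.exp(-0.5 * (d / ell) ** 2)

    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_query, X_train)
    prior = sf2 * np.ones(len(X_query))
    # diag(Ks @ K^{-1} @ Ks.T), computed without forming the full matrix
    return prior - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
```

Normalizing such a variance map over the domain gives one simple heuristic reference density; the paper instead obtains the reference by solving a PDE-constrained optimization.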