With the rapid development of AI and robotics, transporting a large swarm of networked robots has foreseeable applications in the near future. Existing research in swarm robotics has mainly followed a bottom-up philosophy with predefined local coordination and control rules. However, with such predefined rules it is arduous to verify global requirements and analyze the resulting collective performance. This motivates us to pursue a top-down approach and develop a provable control strategy for deploying a robotic swarm to achieve a desired global configuration. Specifically, we use mean-field partial differential equations (PDEs) to model the swarm and control its mean-field density (i.e., probability density) over a bounded spatial domain using mean-field feedback. The presented control law uses density estimates as feedback signals and generates corresponding velocity fields that, by acting locally on individual robots, guide their global distribution to a target profile. The design of the velocity field is therefore centralized, but the implementation of the controller can be fully distributed: individual robots sense the velocity field and derive their own velocity control signals accordingly. The key contribution lies in applying the concept of input-to-state stability (ISS) to show that the perturbed closed-loop system (a nonlinear and time-varying PDE) is locally ISS with respect to density estimation errors. The effectiveness of the proposed control laws is verified using agent-based simulations.
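To make the feedback mechanism concrete, the following is a minimal 1-D agent-based sketch. It assumes a simplified gradient-type velocity field v(x) = -k d/dx(rho_hat(x) - rho_ref(x)) and a kernel density estimate standing in for the density estimator; the actual control law and estimator in the paper may differ.

    import numpy as np

    # Minimal 1-D agent-based sketch (illustrative only).  Assumed feedback law:
    # v(x) = -k * d/dx( rho_hat(x) - rho_ref(x) ), a simplified stand-in for the
    # velocity field designed in the paper; rho_hat is a kernel density estimate.

    rng = np.random.default_rng(0)
    N = 2000                                 # number of robots
    x = rng.uniform(0.0, 1.0, N)             # initial positions on [0, 1]
    grid = np.linspace(0.0, 1.0, 201)
    dx = grid[1] - grid[0]
    dt, k, h = 0.01, 0.2, 0.05               # time step, gain, KDE bandwidth

    target = np.exp(-0.5 * ((grid - 0.7) / 0.1) ** 2)
    rho_ref = target / (target.sum() * dx)   # target density, normalised on the grid

    def kde(pos):
        # Gaussian kernel density estimate of the swarm density on the grid.
        d = (grid[:, None] - pos[None, :]) / h
        return np.exp(-0.5 * d ** 2).sum(axis=1) / (len(pos) * h * np.sqrt(2 * np.pi))

    for _ in range(1000):
        err = kde(x) - rho_ref
        v_grid = -k * np.gradient(err, dx)   # velocity field on the grid
        v = np.interp(x, grid, v_grid)       # each robot samples the field locally
        x = np.clip(x + dt * v, 0.0, 1.0)

    print("final L1 density error:", np.abs(kde(x) - rho_ref).sum() * dx)

Although the controller design is centralized, each robot in this sketch only evaluates the velocity field at its own position, mirroring the distributed implementation described above.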
Recent years have seen increased interest in mean-field density based modelling and control strategies for deploying robotic swarms. In this paper, we study how to dynamically deploy robots, subject to their physical constraints, to efficiently measure and reconstruct an unknown spatial field (e.g., the air pollution index over a city). Specifically, the evolution of the robots' density is modelled by mean-field partial differential equations (PDEs), which are uniquely determined by the robots' individual dynamics. Bayesian regression models are used to obtain predictions and return a variance function that represents the confidence of the predictions. Based on this variance function, we formulate a PDE-constrained optimization problem to dynamically generate a reference density signal that guides the robots to uncertain areas to collect new data, and we design mean-field feedback-based control laws such that the robots' density converges to this reference signal. We also show that the proposed feedback law is robust to density estimation errors in the sense of input-to-state stability. Simulations are included to verify the effectiveness of the algorithms.
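As a rough illustration of how predictive uncertainty can drive the reference density, the sketch below fits a Gaussian process to a toy 1-D field and takes the reference density to be the normalised predictive variance. This normalisation is an assumption made only for illustration; the paper obtains the reference signal from a PDE-constrained optimization problem instead.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Illustrative construction of a reference density from predictive uncertainty.
    # Assumption: the reference density here is simply the normalised predictive
    # variance, so robots are drawn towards uncertain regions.

    rng = np.random.default_rng(1)
    field = lambda z: np.sin(3.0 * z) + 0.5 * z            # toy "unknown" spatial field
    X_obs = rng.uniform(0.0, 1.0, (15, 1))                 # locations already sampled
    y_obs = field(X_obs[:, 0]) + 0.05 * rng.standard_normal(15)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-3)
    gp.fit(X_obs, y_obs)

    grid = np.linspace(0.0, 1.0, 200)
    _, std = gp.predict(grid.reshape(-1, 1), return_std=True)

    var = std ** 2
    rho_ref = var / (var.sum() * (grid[1] - grid[0]))      # normalise into a density
    # rho_ref then serves as the target profile for the mean-field density controller.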
Swarm robotic systems have foreseeable applications in the near future. Recently, a growing body of literature has employed mean-field partial differential equations (PDEs) to model the time evolution of the probability density of swarm robotic systems, and has used mean-field feedback to design stable control laws that act on individuals so that their density converges to a target profile. However, it remains largely unexplored how to estimate the mean-field density, how the density estimation algorithms affect the control performance, and whether the estimation performance in turn depends on the control algorithms. In this work, we focus on the interplay of these algorithms. Specifically, we propose new mean-field control laws which use the real-time density and its gradient as feedback, and prove that they are globally input-to-state stable (ISS) with respect to estimation errors. Then, we design filtering algorithms to obtain estimates of the density and its gradient, and prove that these estimates are convergent assuming the control laws are known. Finally, we show that the feedback interconnection of these estimation and control algorithms is still globally ISS, which is attributed to the bilinearity of the mean-field PDE system. An agent-based simulation is included to verify the stability of these algorithms and their feedback interconnection.
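As a point of comparison for the estimation step, here is a minimal kernel-based estimator of the density and its gradient from sampled robot positions. It is only a static stand-in for the dynamic filtering algorithms in the paper, which additionally exploit knowledge of the control law and the PDE dynamics.

    import numpy as np

    # Static kernel-based estimate of the swarm density and its spatial gradient.

    def density_and_gradient(positions, grid, h=0.05):
        # Gaussian kernels: rho_hat(z) = (1/(N*h)) * sum_i phi((z - x_i)/h)
        d = (grid[:, None] - positions[None, :]) / h
        phi = np.exp(-0.5 * d ** 2) / np.sqrt(2.0 * np.pi)
        rho_hat = phi.sum(axis=1) / (len(positions) * h)
        # Analytic derivative of the kernel: d/dz phi((z-x)/h) = -(z-x)/h**2 * phi
        grad_hat = (-d * phi).sum(axis=1) / (len(positions) * h ** 2)
        return rho_hat, grad_hat

    positions = np.random.default_rng(2).normal(0.5, 0.1, 1000)
    grid = np.linspace(0.0, 1.0, 201)
    rho_hat, grad_hat = density_and_gradient(positions, grid)

Both quantities would then be fed back into the control law, which is exactly the interconnection whose stability the paper analyses.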
Symbolic control is an abstraction-based controller synthesis approach that provides, algorithmically, certifiable-by-construction controllers for cyber-physical systems. Current methodologies of symbolic control usually assume that full-state information is available, which is not the case in many real-world applications where only partial-state or output information can be measured. This article introduces a framework for output-feedback symbolic control. We propose relations between original systems and their symbolic models based on outputs, which enable designing symbolic controllers and refining them to enforce complex requirements on the original systems. To demonstrate the effectiveness of the proposed framework, we provide three different methodologies that are applicable to a wide range of linear and nonlinear systems and support general logic specifications.
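For readers unfamiliar with abstraction-based synthesis, the sketch below builds a finite abstraction of a toy scalar system and synthesizes a safety controller via a fixed-point computation. The system, grid, and input alphabet are invented for illustration, and the output-feedback relations and refinement procedures introduced in the article are not reproduced here.

    import numpy as np
    from itertools import product

    # Toy symbolic abstraction of x+ = 0.9*x + u on [-1, 1] with a finite input set,
    # followed by a maximal fixed-point safety controller ("stay inside [-1, 1]").

    a = 0.9
    cells = np.linspace(-1.0, 1.0, 21)            # 20 symbolic states (intervals)
    inputs = [-0.2, 0.0, 0.2]                     # finite input alphabet

    def successors(i, u):
        # Over-approximate the image of cell i under input u by an interval and
        # return the set of cells it intersects; empty set if it leaves the domain.
        lo, hi = a * cells[i] + u, a * cells[i + 1] + u
        if lo < cells[0] or hi > cells[-1]:
            return set()
        return {j for j in range(len(cells) - 1)
                if hi >= cells[j] and lo <= cells[j + 1]}

    T = {(i, u): successors(i, u)                 # transition relation of the abstraction
         for i, u in product(range(len(cells) - 1), inputs)}

    safe = set(range(len(cells) - 1))             # start from all symbolic states
    while True:                                   # maximal fixed point for safety
        keep = {i for i in safe
                if any(T[(i, u)] and T[(i, u)] <= safe for u in inputs)}
        if keep == safe:
            break
        safe = keep

    controller = {i: [u for u in inputs if T[(i, u)] and T[(i, u)] <= safe]
                  for i in safe}
    print(f"{len(safe)} of {len(cells) - 1} symbolic states are controlled-safe")

The article's contribution is to carry out this kind of construction when only outputs, rather than full states, are available for feedback.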
The security of mobile robotic networks (MRNs) has been an active research topic in recent years. This paper demonstrates that the observable interaction process of MRNs under formation control presents increasingly severe threats. Specifically, we find that an external attack robot, which has only partial observation of the MRN and neither knows the system dynamics nor has internal access, can learn the interaction rules from observations and utilize them to replace a target robot, destroying the cooperation performance of the MRN. We call this novel attack sneak; it endows the attacker with the intelligence to learn knowledge and is hard to tackle with traditional defense techniques. The key insight is to separately reveal the internal interaction structure among robots and the external interaction mechanism with the environment from the coupled state evolution, which is influenced by the unknown interaction rules and the unobservable part of the MRN. To this end, we first provide a general model of the interaction process and prove the learnability of the interaction rules. Then, with the learned rules, we design an Evaluate-Cut-Restore (ECR) attack strategy that accounts for the partial interaction structure and the geometric pattern. We also establish sufficient conditions for a successful sneak attack with maximum control impact over the MRN. Extensive simulations illustrate the feasibility and effectiveness of the proposed attack.
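To give a flavour of the learnability argument, the snippet below recovers the interaction (consensus) weights of a toy network from observed state snapshots by least squares. The full observability and noiseless linear dynamics assumed here are illustrative simplifications, and the ECR strategy itself is not shown.

    import numpy as np

    # Toy example: recover the interaction rule x(k+1) = W x(k) of a consensus-type
    # network from observed trajectories via least squares.

    rng = np.random.default_rng(3)
    n = 6
    A = (rng.random((n, n)) < 0.5).astype(float)
    A = np.triu(A, 1); A = A + A.T                # random undirected interaction graph
    L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
    W = np.eye(n) - 0.1 * L                       # ground-truth consensus update matrix

    prev, nxt = [], []
    for _ in range(10):                           # several short observed trajectories
        x = rng.standard_normal(n)
        for _ in range(5):
            x_next = W @ x
            prev.append(x); nxt.append(x_next)
            x = x_next

    P, Q = np.array(prev).T, np.array(nxt).T      # data matrices (n x num_snapshots)
    W_hat = Q @ np.linalg.pinv(P)                 # least-squares estimate of the rule
    print("estimation error:", np.linalg.norm(W_hat - W))

Once such a rule is identified, an attacker can mimic a legitimate robot's behaviour, which is the starting point of the sneak attack described above.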
Distributed linear control design is crucial for large-scale cyber-physical systems. It is generally desirable both to impose information exchange (communication) constraints on the distributed controller and to limit the propagation of disturbances to a local region without cascading to the global network (localization). The recently proposed System Level Synthesis (SLS) theory provides a framework in which such communication and localization requirements can be tractably incorporated into controller design and implementation. In this work, we derive a solution to the localized and distributed H2 state feedback control problem without resorting to Finite Impulse Response (FIR) approximation. Our proposed synthesis algorithm allows a column-wise decomposition of the resulting convex program and is therefore scalable to arbitrarily large networks. We demonstrate the superior cost performance and computation time of the proposed procedure over previous methods via numerical simulations.
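For orientation, the state-feedback SLS problem underlying this work can be sketched as follows, in standard SLS notation that may differ from the paper's:

\[
\min_{\Phi_x,\,\Phi_u}\;
\left\|
\begin{bmatrix} Q^{1/2} & 0 \\ 0 & R^{1/2} \end{bmatrix}
\begin{bmatrix} \Phi_x \\ \Phi_u \end{bmatrix}
\right\|_{\mathcal{H}_2}^{2}
\quad \text{s.t.} \quad
\begin{bmatrix} zI - A & -B \end{bmatrix}
\begin{bmatrix} \Phi_x \\ \Phi_u \end{bmatrix} = I,
\qquad
\Phi_x,\,\Phi_u \in \tfrac{1}{z}\mathcal{RH}_\infty \cap \mathcal{S},
\]

where $\{\Phi_x,\Phi_u\}$ are the closed-loop system responses from the disturbance to the state and control input, and $\mathcal{S}$ encodes the communication and localization constraints. Because the $\mathcal{H}_2$ norm is column-wise separable, $\|\Phi\|_{\mathcal{H}_2}^2 = \sum_j \|\Phi e_j\|_{\mathcal{H}_2}^2$, and the affine constraint also decouples column by column, the program splits into independent subproblems, one per disturbance location; this is the column-wise decomposition that makes the synthesis scalable.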