
Safe CPS from Unsafe Controllers

Posted by Usama Mehmood
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In this paper, we explore using runtime verification to design safe cyber-physical systems (CPS). We build upon the Simplex Architecture, where control authority may switch from an unverified and potentially unsafe advanced controller to a backup baseline controller in order to maintain system safety. New to our approach, we remove the requirement that the baseline controller is statically verified. This is important as there are many types of powerful control techniques -- model-predictive control, rapidly-exploring random trees and neural network controllers -- that often work well in practice, but are difficult to statically prove correct, and therefore could not be used before as baseline controllers. We prove that, through more extensive runtime checks, such an approach can still guarantee safety. We call this approach the Black-Box Simplex Architecture, as both high-level controllers are treated as black boxes. We present case studies where model-predictive control provides safety for multi-robot coordination, and neural networks provably prevent collisions for groups of F-16 aircraft, despite occasionally outputting unsafe actions.
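To make the switching idea concrete, the sketch below shows one way a Simplex-style decision module with runtime checks could look. It is an illustration only, not the paper's implementation: the plant (a 1D double integrator), both controllers, and the recoverability check are invented for this example.

# Minimal sketch of a Simplex-style switching loop with runtime checks.
# Illustrative only: the plant, both controllers, and the recoverability
# check below are invented for this example, not taken from the paper.

DT = 0.1                           # simulation time step
SAFE_MIN, SAFE_MAX = -10.0, 10.0   # safe set: position must stay in this range

def step(state, accel):
    # Discrete-time double integrator; state = (position, velocity).
    pos, vel = state
    return (pos + vel * DT, vel + accel * DT)

def in_safe_set(state):
    return SAFE_MIN <= state[0] <= SAFE_MAX

def advanced_controller(state):
    # Stand-in for an unverified high-performance controller (e.g., MPC or a
    # neural-network policy); here it simply keeps accelerating, so it will
    # eventually propose actions that would leave the safe set.
    return 2.0

def baseline_controller(state):
    # Stand-in for an unverified baseline: brake to a full stop.
    _pos, vel = state
    return max(-3.0, min(3.0, -vel / DT))  # decelerate without overshooting zero

def recoverable(state, horizon=100):
    # Runtime check: simulate the baseline controller forward and require
    # that the system stays in the safe set and comes to rest. In Black-Box
    # Simplex, checks of this flavor replace static verification of the
    # baseline controller.
    s = state
    for _ in range(horizon):
        if not in_safe_set(s):
            return False
        s = step(s, baseline_controller(s))
    return in_safe_set(s) and abs(s[1]) < 0.5

def decision_module(state):
    # Accept the advanced controller's action only if the state it leads to
    # is still recoverable by the baseline; otherwise fall back.
    proposed = advanced_controller(state)
    if recoverable(step(state, proposed)):
        return proposed
    return baseline_controller(state)

if __name__ == "__main__":
    state = (0.0, 0.0)
    for _ in range(200):
        state = step(state, decision_module(state))
        assert in_safe_set(state), "safety violated"
    print("final state:", state)

The key point of the sketch is that the safety argument rests on the runtime recoverability check rather than on a static proof about either controller.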




Read also

We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous Cyber-Physical Systems and, at the same time, to generate challenging tests. This procedure combines formal methods for model verification with Generative Adversarial Networks. The method learns two Neural Networks: the first one aims at generating troubling scenarios for the controller, while the second one aims at enforcing the safety constraints. We test the proposed method on a variety of case studies.
Dynamical systems comprised of autonomous agents arise in many relevant problems such as multi-agent robotics, smart grids, or smart cities. Controlling these systems is of paramount importance to guarantee a successful deployment. Optimal centralized controllers are readily available but face limitations in terms of scalability and practical implementation. Optimal decentralized controllers, on the other hand, are difficult to find. In this paper, we propose a framework using graph neural networks (GNNs) to learn decentralized controllers from data. While GNNs are naturally distributed architectures, making them perfectly suited for the task, we adapt them to handle delayed communications as well. Furthermore, they are equivariant and stable, leading to good scalability and transferability properties. The problem of flocking is explored to illustrate the potential of GNNs in learning decentralized controllers.
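As a rough illustration of why GNNs fit the decentralized setting, the sketch below shows a single graph-filter layer in which every agent applies the same weights to its own state and to an aggregate of its neighbors' states, so only local communication is required. The graph, features, and weights are made up for the example and are not the architecture from the paper.

# Rough sketch of one decentralized GNN-style control layer (illustrative only).
import numpy as np

N, F = 10, 4                                  # 10 agents, 4 features each (e.g., position/velocity)
rng = np.random.default_rng(0)
X = rng.normal(size=(N, F))                   # current agent states
A = (rng.random((N, N)) < 0.3).astype(float)  # communication graph (adjacency matrix)
np.fill_diagonal(A, 0.0)                      # no self-loops

W_self = rng.normal(scale=0.1, size=(F, 2))   # shared weights applied to each agent's own state
W_nbr = rng.normal(scale=0.1, size=(F, 2))    # shared weights applied to aggregated neighbor states

def gnn_layer(X, A):
    # Each agent combines its own state with the sum of its neighbors' states
    # using the *same* weights, which makes the rule permutation equivariant
    # and executable with only local exchanges.
    neighbor_sum = A @ X
    return np.tanh(X @ W_self + neighbor_sum @ W_nbr)   # one 2-D action per agent

actions = gnn_layer(X, A)
print(actions.shape)   # (10, 2)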
We address the issue of safe optimal path planning under parametric uncertainties using a novel regularizer that allows trading off optimality with safety. The proposed regularizer leverages the notion that collisions may be modeled as constraint violations in an optimal control setting in order to produce open-loop trajectories with reduced risk of collisions. The risk of constraint violation is evaluated using a state-dependent relevance function and first-order variations in the constraint function with respect to parametric variations. The approach is generic and can be adapted to any optimal control formulation that deals with constraints under parametric uncertainty. Simulations using a holonomic robot avoiding multiple dynamic obstacles with uncertain velocities are used to demonstrate the effectiveness of the proposed approach. Finally, we introduce the car vs. train problem to emphasize the dependence of the resultant risk aversion behavior on the form of the constraint function used to derive the regularizer.
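Read literally, that description suggests a penalty of roughly the following shape: the nominal cost is augmented with a term that weights first-order sensitivities of the constraints to the uncertain parameters by a state-dependent relevance function. This is a hedged reconstruction with illustrative symbols, not the paper's exact formula.

% Hedged reconstruction of the regularized objective (symbols illustrative):
% J : nominal cost, u : control, x_k : states along the trajectory,
% c : constraint function, p : uncertain parameters,
% rho : state-dependent relevance, lambda : optimality/safety trade-off weight.
\tilde{J}(u) \;=\; J(u) \;+\; \lambda \sum_{k} \rho(x_k)\,
  \left\lVert \frac{\partial c}{\partial p}(x_k, p) \right\rVert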
High-performance but unverified controllers, e.g., artificial intelligence-based (a.k.a. AI-based) controllers, are widely employed in cyber-physical systems (CPSs) to accomplish complex control missions. However, guaranteeing the safety and reliability of CPSs with such controllers is currently very challenging, which is of vital importance in many real-life safety-critical applications. To cope with this difficulty, we propose in this work a Safe-visor architecture for sandboxing unverified controllers in CPSs operating in noisy environments (a.k.a. stochastic CPSs). The proposed architecture contains a history-based supervisor, which checks inputs from the unverified controller and makes a compromise between functionality and safety of the system, and a safety advisor that provides fallback when the unverified controller endangers the safety of the system. Both the history-based supervisor and the safety advisor are designed based on an approximate probabilistic relation between the original system and its finite abstraction. By employing this architecture, we provide formal probabilistic guarantees on preserving the safety specifications expressed by accepting languages of deterministic finite automata (DFA). Meanwhile, the unverified controllers can still be employed in the control loop even though they are not reliable. We demonstrate the effectiveness of our proposed results by applying them to two (physical) case studies.
For safely applying reinforcement learning algorithms on high-dimensional nonlinear dynamical systems, a simplified system model is used to formulate a safe reinforcement learning framework. Based on the simplified system model, a low-dimensional representation of the safe region is identified and is used to provide safety estimates for learning algorithms. However, finding a satisfying simplified system model for complex dynamical systems usually requires a considerable amount of effort. To overcome this limitation, we propose in this work a general data-driven approach that is able to efficiently learn a low-dimensional representation of the safe region. Through an online adaptation method, the low-dimensional representation is updated by using the feedback data such that more accurate safety estimates are obtained. The performance of the proposed approach for identifying the low-dimensional representation of the safe region is demonstrated with a quadcopter example. The results show that, compared to previous work, a more reliable and representative low-dimensional representation of the safe region is derived, which then extends the applicability of the safe reinforcement learning framework.