
Adversarial Learning of Robust and Safe Controllers for Cyber-Physical Systems

Posted by Ginevra Carbone
Publication date: 2020
Paper language: English





We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous Cyber-Physical Systems and, at the same time, to generate challenging tests. This procedure combines formal methods for model verification with Generative Adversarial Networks. The method learns two Neural Networks: the first generates challenging scenarios for the controller, while the second enforces the safety constraints. We evaluate the proposed method on a variety of case studies.
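The adversarial setup described above can be sketched as a minimal coevolution loop. This is an illustrative stand-in, not the paper's method: the "attacker" and "controller" networks are replaced by random disturbance sampling and a single scalar gain, and the 1-D dynamics, step size, and bounds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(gain, disturbance):
    """Simulate x_{k+1} = x_k + 0.1 * (-gain * x_k + d_k) from x_0 = 0
    and return the largest excursion |x_k| along the trajectory."""
    x, worst = 0.0, 0.0
    for d in disturbance:
        x = x + 0.1 * (-gain * x + d)
        worst = max(worst, abs(x))
    return worst

gain = 0.5  # initial controller parameter
for _ in range(50):
    # "attacker" stand-in: sample disturbance sequences, keep the most troubling one
    candidates = rng.uniform(-1.0, 1.0, size=(16, 20))
    worst_d = max(candidates, key=lambda d: rollout(gain, d))
    # "controller" update: accept a gain nudge only if it shrinks the worst case
    if rollout(gain + 0.05, worst_d) < rollout(gain, worst_d):
        gain += 0.05
```

Each round, one player searches for a scenario that maximizes the safety violation while the other adapts to minimize it, mirroring the GAN-style training dynamic in the abstract.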




Read also

High performance but unverified controllers, e.g., artificial intelligence-based (a.k.a. AI-based) controllers, are widely employed in cyber-physical systems (CPSs) to accomplish complex control missions. However, guaranteeing the safety and reliability of CPSs with this kind of controllers is currently very challenging, which is of vital importance in many real-life safety-critical applications. To cope with this difficulty, we propose in this work a Safe-visor architecture for sandboxing unverified controllers in CPSs operating in noisy environments (a.k.a. stochastic CPSs). The proposed architecture contains a history-based supervisor, which checks inputs from the unverified controller and makes a compromise between functionality and safety of the system, and a safety advisor that provides fallback when the unverified controller endangers the safety of the system. Both the history-based supervisor and the safety advisor are designed based on an approximate probabilistic relation between the original system and its finite abstraction. By employing this architecture, we provide formal probabilistic guarantees on preserving the safety specifications expressed by accepting languages of deterministic finite automata (DFA). Meanwhile, the unverified controllers can still be employed in the control loop even though they are not reliable. We demonstrate the effectiveness of our proposed results by applying them to two (physical) case studies.
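The supervisor-plus-advisor pattern can be sketched in a few lines. This is a deliberately stateless toy (the paper's supervisor is history-based and probabilistic): the 1-D dynamics, the safe set |x| <= 1, and both inputs are hypothetical.

```python
SAFE_LIMIT = 1.0  # hypothetical safe set: |x| <= 1

def next_state(x, u):
    """Toy one-step dynamics x_{k+1} = x_k + u_k."""
    return x + u

def supervisor(x, u_unverified, u_fallback):
    """Sandbox sketch: accept the unverified controller's input only if the
    predicted next state stays inside the safe set; otherwise hand control
    to the safety advisor's fallback input (assumed verified safe)."""
    if abs(next_state(x, u_unverified)) <= SAFE_LIMIT:
        return u_unverified
    return u_fallback
```

For example, near the boundary (x = 0.8) an aggressive unverified input of 0.5 would exit the safe set, so the supervisor substitutes the fallback; far from the boundary the unverified controller runs unmodified.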
For a class of Cyber-Physical Systems (CPSs), we address the problem of performing computations over the cloud without revealing private information about the structure and operation of the system. We model CPSs as a collection of input-output dynamical systems (the system operation modes). Depending on the mode the system is operating on, the output trajectory is generated by one of these systems in response to driving inputs. Output measurements and driving inputs are sent to the cloud for processing purposes. We capture this processing through some function (of the input-output trajectory) that we require the cloud to compute accurately - referred here as the trajectory utility. However, for privacy reasons, we would like to keep the mode private, i.e., we do not want the cloud to correctly identify what mode of the CPS produced a given trajectory. To this end, we distort trajectories before transmission and send the corrupted data to the cloud. We provide mathematical tools (based on output-regulation techniques) to properly design distorting mechanisms so that: 1) the original and distorted trajectories lead to the same utility; and 2) the distorted data leads the cloud to misclassify the mode.
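The "distort but preserve utility" idea admits a very simple illustration. The sketch below is a toy stand-in for the paper's output-regulation-based construction: the utility is assumed to be the trajectory mean, and the perturbation is made exactly utility-neutral by centering it.

```python
import numpy as np

rng = np.random.default_rng(1)

def distort(y):
    """Mask a trajectory with a perturbation that leaves the utility
    (here, hypothetically, the trajectory mean) exactly unchanged."""
    noise = rng.normal(scale=1.0, size=y.shape)
    noise -= noise.mean()  # zero-sum perturbation => mean is preserved
    return y + noise

y = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))  # mode-revealing output
y_tilde = distort(y)
```

The cloud still computes the correct utility from `y_tilde`, but the sample-level shape that would identify the operating mode is obscured.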
Assuring the correct behavior of cyber-physical systems requires significant modeling effort, particularly during early stages of the engineering and design process when a system is not yet available for testing or verification of proper behavior. A primary motivation for 'getting things right' in these early design stages is that altering the design is significantly less costly and more effective than when hardware and software have already been developed. Engineering cyber-physical systems requires the construction of several different types of models, each representing a different view, which include stakeholder requirements, system behavior, and the system architecture. Furthermore, each of these models can be represented at different levels of abstraction. Formal reasoning has improved the precision and expanded the available types of analysis in assuring correctness of requirements, behaviors, and architectures. However, each is usually modeled in distinct formalisms and corresponding tools. Currently, this disparity means that a system designer must manually check that the different models are in agreement. Manually editing and checking models is error prone, time consuming, and sensitive to any changes in the design of the models themselves. Wiring diagrams and related theory provide a means for formally organizing these different but related modeling views, resulting in a compositional modeling language for cyber-physical systems. Such a categorical language can make concrete the relationship between different model views, thereby managing complexity, allowing hierarchical decomposition of system models, and formally proving consistency between models.
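The compositional, interface-checked flavor of wiring diagrams can be hinted at with typed "boxes". This is only a loose analogy, not the paper's categorical machinery: the box names, port types, and serial-only composition are all hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """A model view as a box with typed input and output ports."""
    name: str
    inputs: tuple
    outputs: tuple

def compose(a, b):
    """Serial composition: wire a's outputs into b's inputs, rejecting
    mismatched interfaces (a toy stand-in for checking that two model
    views are consistent with each other)."""
    if a.outputs != b.inputs:
        raise TypeError(f"cannot wire {a.name} -> {b.name}: interface mismatch")
    return Box(f"{a.name};{b.name}", a.inputs, b.outputs)

req = Box("requirements", ("stimulus",), ("behavior_spec",))
beh = Box("behavior", ("behavior_spec",), ("trace",))
```

Composition succeeds only when the interfaces line up, so an inconsistency between views is caught mechanically rather than by manual inspection.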
Designing resilient control strategies for mitigating stealthy attacks is a crucial task in emerging cyber-physical systems. In the design of anomaly detectors, it is common to assume Gaussian noise models to maintain tractability; however, this assumption can lead the actual false alarm rate to be significantly higher than expected. We propose a distributionally robust anomaly detector for noise distributions in moment-based ambiguity sets. We design a detection threshold that guarantees that the actual false alarm rate is upper bounded by the desired one by using generalized Chebyshev inequalities. Furthermore, we highlight an important trade-off between the worst-case false alarm rate and the potential impact of a stealthy attacker by efficiently computing an outer ellipsoidal bound for the attack-reachable states corresponding to the distributionally robust detector threshold. We illustrate this trade-off with a numerical example and compare the proposed approach with a traditional chi-squared detector.
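The scalar version of the Chebyshev-based threshold is easy to write down. For any zero-mean residual with standard deviation sigma, Chebyshev's inequality gives P(|r| >= t) <= sigma^2 / t^2, so t = sigma / sqrt(alpha) bounds the false alarm rate by alpha over the whole moment-based ambiguity set. The scalar setting and the Gaussian comparison below are illustrative simplifications of the paper's multivariate construction.

```python
import math

def robust_threshold(sigma, alpha):
    """Distributionally robust threshold via Chebyshev's inequality:
    P(|r| >= t) <= sigma**2 / t**2 for *any* zero-mean distribution with
    std. dev. sigma, so t = sigma / sqrt(alpha) caps the worst-case
    false alarm rate at alpha."""
    return sigma / math.sqrt(alpha)

# Trade-off: at alpha = 0.05 the robust threshold is sigma / sqrt(0.05),
# roughly 4.47 * sigma, versus about 1.96 * sigma for a Gaussian detector,
# giving a stealthy attacker a larger undetected operating region.
t_robust = robust_threshold(1.0, 0.05)
```

The larger robust threshold is exactly the trade-off the abstract highlights: a guaranteed false alarm bound for all distributions in the ambiguity set, at the price of a bigger attack-reachable set.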
Yixuan Wang, Chao Huang, Qi Zhu (2020)
Neural networks have been increasingly applied for control in learning-enabled cyber-physical systems (LE-CPSs) and demonstrated great promises in improving system performance and efficiency, as well as reducing the need for complex physical models. However, the lack of safety guarantees for such neural network based controllers has significantly impeded their adoption in safety-critical CPSs. In this work, we propose a controller adaptation approach that automatically switches among multiple controllers, including neural network controllers, to guarantee system safety and improve energy efficiency. Our approach includes two key components based on formal methods and machine learning. First, we approximate each controller with a Bernstein-polynomial based hybrid system model under bounded disturbance, and compute a safe invariant set for each controller based on its corresponding hybrid system. Intuitively, the invariant set of a controller defines the state space where the system can always remain safe under its control. The union of the controllers' invariant sets then defines a safe adaptation space that is larger than (or equal to) that of each controller. Second, we develop a deep reinforcement learning method to learn a controller switching strategy for reducing the control/actuation energy cost, while ensuring, with the help of a safety guard rule, that the system stays within the safe space. Experiments on a linear adaptive cruise control system and a non-linear Van der Pol oscillator demonstrate the effectiveness of our approach on energy saving and safety enhancement.
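The safety-guarded switching rule can be sketched with interval invariant sets. This is a toy stand-in for the paper's learned RL policy: the 1-D state, the two controllers, their invariant intervals, and their energy costs are all hypothetical, and the "strategy" here is simply greedy cost minimization among admissible controllers.

```python
def choose_controller(x, controllers):
    """Safety-guarded switching sketch: among controllers whose invariant
    set contains the current state, pick the one with the lowest energy
    cost. The safety guard is the admissibility filter itself."""
    admissible = [c for c in controllers if c["inv"][0] <= x <= c["inv"][1]]
    if not admissible:
        raise RuntimeError("state left the safe adaptation space")
    return min(admissible, key=lambda c: c["cost"])

controllers = [
    # efficient neural controller, but with a smaller verified invariant set
    {"name": "nn", "inv": (-0.5, 0.5), "cost": 1.0},
    # verified fallback controller with a larger invariant set, higher cost
    {"name": "fallback", "inv": (-2.0, 2.0), "cost": 3.0},
]
```

Inside the neural controller's invariant set the cheap controller is used; outside it, the guard forces the fallback, so the system never leaves the union of the invariant sets (the safe adaptation space).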