
Energy-Efficient Control Adaptation with Safety Guarantees for Learning-Enabled Cyber-Physical Systems

Added by Yixuan Wang
Publication date: 2020
Language: English





Neural networks have been increasingly applied for control in learning-enabled cyber-physical systems (LE-CPSs) and have demonstrated great promise in improving system performance and efficiency, as well as reducing the need for complex physical models. However, the lack of safety guarantees for such neural network based controllers has significantly impeded their adoption in safety-critical CPSs. In this work, we propose a controller adaptation approach that automatically switches among multiple controllers, including neural network controllers, to guarantee system safety and improve energy efficiency. Our approach includes two key components based on formal methods and machine learning. First, we approximate each controller with a Bernstein-polynomial based hybrid system model under bounded disturbance, and compute a safe invariant set for each controller based on its corresponding hybrid system. Intuitively, the invariant set of a controller defines the state space where the system can always remain safe under its control. The union of the controllers' invariant sets then defines a safe adaptation space that is larger than (or equal to) that of each individual controller. Second, we develop a deep reinforcement learning method to learn a controller switching strategy that reduces the control/actuation energy cost, while a safety guard rule ensures that the system stays within the safe space. Experiments on a linear adaptive cruise control system and a non-linear Van der Pol oscillator demonstrate the effectiveness of our approach in energy saving and safety enhancement.
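The safety guard rule described above can be sketched as follows. This is a minimal illustration with hypothetical controller names and 1-D interval invariants (the paper computes invariant sets from Bernstein-polynomial hybrid system models): a learned policy proposes a controller switch, and the guard accepts it only if the current state lies in that controller's safe invariant set, falling back otherwise to any controller whose invariant covers the state.

```python
# Invariant sets approximated here as 1-D intervals (lo, hi) per controller.
# Names and values are illustrative assumptions, not from the paper.
INVARIANTS = {
    "nn_aggressive": (-1.0, 1.0),   # small invariant set, low energy cost
    "pid_safe":      (-5.0, 5.0),   # large invariant set, higher energy cost
}

def in_invariant(x, name):
    lo, hi = INVARIANTS[name]
    return lo <= x <= hi

def guarded_switch(x, proposed):
    """Safety guard rule: accept the proposed controller only if the
    state x lies inside its invariant set; otherwise fall back to any
    controller whose invariant set covers x."""
    if in_invariant(x, proposed):
        return proposed
    for name in INVARIANTS:
        if in_invariant(x, name):
            return name
    raise RuntimeError("state left the safe adaptation space")

print(guarded_switch(0.5, "nn_aggressive"))  # proposal accepted
print(guarded_switch(3.0, "nn_aggressive"))  # guard overrides the proposal
```

The union of the two intervals is the safe adaptation space: states in (-5, -1) or (1, 5) are unsafe for the aggressive controller alone, but the guard keeps the system safe by routing them to the conservative one.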



Related research

Control schemes for autonomous systems are often designed in a way that anticipates the worst case in any situation. At runtime, however, there may be opportunities to leverage the characteristics of the specific environment and operation context for more efficient control. In this work, we develop an online intermittent-control framework that combines formal verification with model-based optimization and deep reinforcement learning to opportunistically skip certain control computation and actuation, saving actuation energy and computational resources without compromising system safety. Experiments on an adaptive cruise control system demonstrate that our approach can achieve significant energy and computation savings.
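The skipping decision above can be sketched for an adaptive cruise control step. All constants and the worst-case bound are illustrative assumptions (the paper derives the skipping condition via formal verification): the previous actuation is held, and the control computation skipped, only when a conservative bound guarantees the headway stays above the safety margin for the skipped period.

```python
DT = 0.1          # control period in seconds (assumed)
D_SAFE = 10.0     # required safe headway in meters (assumed)
V_MAX_REL = 5.0   # assumed verified bound on closing speed, m/s

def can_skip(gap, horizon_steps=1):
    # Worst case: the gap shrinks at the maximum closing speed for the
    # whole skipped horizon. Skip only if it still exceeds the margin.
    worst_gap = gap - V_MAX_REL * DT * horizon_steps
    return worst_gap > D_SAFE

def step(gap, last_u, compute_u):
    """Return (applied input, whether computation was skipped)."""
    if can_skip(gap):
        return last_u, True          # hold previous input, save energy
    return compute_u(gap), False     # recompute control as usual

print(step(20.0, 1.0, lambda g: 2.0))  # large gap: skip, hold last input
print(step(10.2, 1.0, lambda g: 2.0))  # tight gap: recompute control
```

The conservatism of the bound trades savings for safety: a tighter verified bound on the closing speed lets the system skip more often.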
He Yin, Peter Seiler, Ming Jin (2020)
A method is presented to learn neural network (NN) controllers with stability and safety guarantees through imitation learning (IL). Convex stability and safety conditions are derived for linear time-invariant plant dynamics with NN controllers by merging Lyapunov theory with local quadratic constraints that bound the nonlinear activation functions in the NN. These conditions are incorporated into the IL process, which minimizes the IL loss while simultaneously maximizing the volume of the region of attraction associated with the NN controller. An algorithm based on the alternating direction method of multipliers is proposed to solve the IL problem. The method is illustrated on an inverted pendulum system, aircraft longitudinal dynamics, and vehicle lateral dynamics.
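A toy illustration of the Lyapunov certificate underlying this kind of method, with an assumed plant and linear gain (the paper handles NN controllers via quadratic constraints on the activations): for closed-loop LTI dynamics x_{k+1} = (A + BK) x_k, a matrix P solving the discrete Lyapunov equation (A+BK)^T P (A+BK) - P = -Q certifies that V(x) = x^T P x decreases along every trajectory.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # assumed double-integrator-like plant
B = np.array([[0.0],
              [0.1]])
K = np.array([[-10.0, -5.0]])       # assumed stabilizing linear gain
Acl = A + B @ K                     # closed-loop dynamics matrix

# Solve the discrete Lyapunov equation via the convergent series
# P = sum_k (Acl^T)^k Q (Acl)^k, valid because Acl is Schur stable.
Q = np.eye(2)
P = Q.copy()
for _ in range(2000):
    P = Q + Acl.T @ P @ Acl

# Certificate check: Lyapunov equation residual is (numerically) zero.
assert np.allclose(Acl.T @ P @ Acl - P, -Q, atol=1e-8)

# V(x) = x^T P x decreases along a sample trajectory.
x = np.array([[1.0], [0.5]])
x_next = Acl @ x
assert float(x_next.T @ P @ x_next) < float(x.T @ P @ x)
```

In the paper this certificate is made convex jointly with the IL loss, so training simultaneously fits the demonstrations and enlarges the certified region of attraction.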
We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous Cyber-Physical Systems and, at the same time, to generate challenging tests. This procedure combines formal methods for model verification with Generative Adversarial Networks. The method learns two Neural Networks: the first one aims at generating troubling scenarios for the controller, while the second one aims at enforcing the safety constraints. We test the proposed method on a variety of case studies.
Assuring the correct behavior of cyber-physical systems requires significant modeling effort, particularly during early stages of the engineering and design process when a system is not yet available for testing or verification of proper behavior. A primary motivation for "getting things right" in these early design stages is that altering the design is significantly less costly and more effective than when hardware and software have already been developed. Engineering cyber-physical systems requires the construction of several different types of models, each representing a different view, which include stakeholder requirements, system behavior, and the system architecture. Furthermore, each of these models can be represented at different levels of abstraction. Formal reasoning has improved the precision and expanded the available types of analysis in assuring correctness of requirements, behaviors, and architectures. However, each is usually modeled in distinct formalisms and corresponding tools. Currently, this disparity means that a system designer must manually check that the different models are in agreement. Manually editing and checking models is error prone, time consuming, and sensitive to any changes in the design of the models themselves. Wiring diagrams and related theory provide a means for formally organizing these different but related modeling views, resulting in a compositional modeling language for cyber-physical systems. Such a categorical language can make concrete the relationship between different model views, thereby managing complexity, allowing hierarchical decomposition of system models, and formally proving consistency between models.
Control barrier functions have shown great success in addressing control problems with safety guarantees. These methods usually find the next safe control input by solving an online quadratic programming problem. However, model uncertainty is a big challenge in synthesizing controllers, as it may lead to the generation of unsafe control actions with severe consequences. In this paper, we develop a learning framework to deal with system uncertainty. Our method focuses on learning the dynamics of the control barrier function, especially for barrier functions of high relative degree with respect to the system dynamics. We show that, at each order, the time derivative of the control barrier function can be separated into the time derivative of the nominal control barrier function and a remainder. This implies that we can use a neural network to learn the remainder and thereby approximate the dynamics of the real control barrier function. We show by simulation that our method can generate safe trajectories under parametric uncertainty using a differential drive robot model.
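The decomposition described above can be illustrated on a toy 1-D system with assumed dynamics and an unmodeled drag term: the true time derivative of the barrier function h equals the nominal-model derivative plus a remainder caused by model mismatch, and that remainder is the quantity a neural network would be trained to regress from data.

```python
def h(x):
    # Barrier function: the safe set is {x : h(x) >= 0}, here |x| <= 1.
    return 1.0 - x * x

def f_nominal(x, u):
    # Nominal model: x_dot = -x + u (assumed)
    return -x + u

def f_true(x, u):
    # True dynamics with an unmodeled cubic drag term (assumed)
    return -x + u - 0.3 * x**3

def hdot_nominal(x, u):
    # dh/dx * x_dot under the nominal model; dh/dx = -2x
    return -2.0 * x * f_nominal(x, u)

def remainder(x, u):
    # Exact remainder; in the learning framework this is the target
    # the neural network approximates from sampled trajectories.
    return -2.0 * x * (f_true(x, u) - f_nominal(x, u))

# The split reconstructs the true derivative of h exactly:
x, u = 0.5, 0.1
true_hdot = -2.0 * x * f_true(x, u)
assert abs(true_hdot - (hdot_nominal(x, u) + remainder(x, u))) < 1e-12
```

With the remainder learned, the safety condition used in the online QP can be evaluated against the approximated true derivative rather than the mismatched nominal one.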
