Quantitative Implementation Strategies for Safety Controllers

Posted by Philipp J. Meyer
Publication date: 2017
Research field: Informatics Engineering
Language: English





We consider the symbolic controller synthesis approach to enforce safety specifications on perturbed, nonlinear control systems. In general, in each state of the system several control values might be applicable to enforce the safety requirement, and the implementation has to pick a particular control value out of possibly many. We present a class of implementation strategies to obtain a controller with certain performance guarantees. This class includes two existing implementation strategies from the literature, based on discounted payoff and mean-payoff games. We unify both approaches by using games characterized by a single discount factor determining the implementation. We evaluate different implementations from our class experimentally on two case studies. We show that the choice of the discount factor has a significant influence on the average long-term costs, and that the best performance guarantee for the symbolic model does not result in the best implementation. When the optimal choice of the discount factor is compared with the previously proposed values, the costs differ by a factor of up to 50. Our approach therefore yields a method to systematically choose a good implementation for safety controllers with quantitative objectives.
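
To make the discounted-payoff idea concrete, here is a minimal sketch, not taken from the paper: it runs value iteration for a discounted min-max game over a small, hypothetical symbolic abstraction in which the safety controller leaves several admissible inputs per state and the perturbation is resolved as the worst-case successor. The state space, transition relation, cost function and the discount factor GAMMA are all illustrative placeholders.

```python
# A minimal sketch (not the paper's algorithm): value iteration for a
# discounted min-max game over a tiny, hypothetical symbolic abstraction.
# States, admissible inputs, transitions and costs are placeholders.

GAMMA = 0.9   # discount factor; the paper studies how this choice affects
              # the average long-term cost of the resulting implementation

states = ["s0", "s1", "s2"]
allowed = {                      # inputs the safety controller leaves admissible
    "s0": ["u0", "u1"],
    "s1": ["u0"],
    "s2": ["u0", "u1"],
}
post = {                         # nondeterministic successors (perturbation)
    ("s0", "u0"): ["s1"], ("s0", "u1"): ["s2"],
    ("s1", "u0"): ["s0", "s2"],
    ("s2", "u0"): ["s2"], ("s2", "u1"): ["s0"],
}
cost = {                         # stage cost of applying an input in a state
    ("s0", "u0"): 1.0, ("s0", "u1"): 3.0,
    ("s1", "u0"): 0.5,
    ("s2", "u0"): 2.0, ("s2", "u1"): 1.0,
}

# The controller minimises the discounted cost; the perturbation
# (worst-case successor) maximises it.
V = {s: 0.0 for s in states}
for _ in range(1000):
    V = {s: min(cost[s, u] + GAMMA * max(V[t] for t in post[s, u])
                for u in allowed[s])
         for s in states}

# Implementation strategy: among the safe inputs, pick one that is optimal
# for the discounted objective.
strategy = {s: min(allowed[s],
                   key=lambda u: cost[s, u]
                   + GAMMA * max(V[t] for t in post[s, u]))
            for s in states}
print(strategy)
```

Varying GAMMA in such a sketch is exactly the knob the paper investigates: different discount factors yield different implementation strategies and, according to the experiments reported above, noticeably different average long-term costs.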


Read also

In this work, the reachable set estimation and safety verification problems for a class of piecewise linear systems equipped with neural network controllers are addressed. The neural network is considered to consist of Rectified Linear Unit (ReLU) activation functions. A layer-by-layer approach is developed for the output reachable set computation of ReLU neural networks. The computation is formulated in the form of a set of manipulations for a union of polytopes. Based on the output reachable set for neural network controllers, the output reachable set for a piecewise linear feedback control system can be estimated iteratively for a given finite-time interval. With the estimated output reachable set, the safety verification for piecewise linear systems with neural network controllers can be performed by checking the existence of intersections of unsafe regions and output reach set. A numerical example is presented to illustrate the effectiveness of our approach.
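
As a rough illustration of the layer-by-layer idea, the sketch below propagates interval boxes instead of the unions of polytopes used in the work above, so it is a coarser over-approximation; all weights, input bounds and the unsafe region are hypothetical placeholders.

```python
import numpy as np

# A minimal sketch of layer-by-layer output over-approximation for a ReLU
# network, using interval boxes (coarser than polytope unions).

def relu_interval(W, b, lo, hi):
    """Propagate an input box [lo, hi] through one affine + ReLU layer."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b      # lower affine bound
    new_hi = W_pos @ hi + W_neg @ lo + b      # upper affine bound
    return np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)

# Two-layer toy controller network (placeholder weights).
layers = [
    (np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])),
    (np.array([[0.7, 1.2]]),              np.array([0.0])),
]

lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])   # input set
for W, b in layers[:-1]:
    lo, hi = relu_interval(W, b, lo, hi)
# The output layer is affine only (no ReLU).
W, b = layers[-1]
W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
out_lo, out_hi = W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

# Safety check: does the output box intersect a hypothetical unsafe region?
unsafe_lo, unsafe_hi = np.array([2.5]), np.array([3.5])
intersects = np.all(out_lo <= unsafe_hi) and np.all(out_hi >= unsafe_lo)
print(out_lo, out_hi, "possible violation" if intersects else "verified safe")
```
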
Neural networks serve as effective controllers in a variety of complex settings due to their ability to represent expressive policies. The complex nature of neural networks, however, makes their output difficult to verify and predict, which limits their use in safety-critical applications. While simulations provide insight into the performance of neural network controllers, they are not enough to guarantee that the controller will perform safely in all scenarios. To address this problem, recent work has focused on formal methods to verify properties of neural network outputs. For neural network controllers, we can use a dynamics model to determine the output properties that must hold for the controller to operate safely. In this work, we develop a method to use the results from neural network verification tools to provide probabilistic safety guarantees on a neural network controller. We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy. Next, we modify the traditional formulation of Markov decision process (MDP) model checking to provide guarantees on the overapproximated policy given a stochastic dynamics model. Finally, we incorporate techniques in state abstraction to reduce overapproximation error during the model checking process. We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks that are loosely inspired by Airborne Collision Avoidance System X (ACAS X), a family of collision avoidance systems that formulates the problem as a partially observable Markov decision process (POMDP).
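
The sketch below illustrates, in a simplified form, the model-checking step: when the verified over-approximation of the policy leaves several actions possible in a state, an upper bound on the probability of reaching an unsafe state is obtained by resolving that choice pessimistically. The states, actions and transition probabilities are invented for illustration and have no connection to ACAS X.

```python
# A minimal sketch (not the tool chain described above): bounding the
# probability of reaching an unsafe state when the over-approximated
# policy allows several actions per state.  All values are hypothetical.

states = ["safe_a", "safe_b", "near", "unsafe"]
unsafe = {"unsafe"}

# Actions each state may receive from the over-approximated policy.
policy_overapprox = {
    "safe_a": ["climb", "level"],
    "safe_b": ["level"],
    "near":   ["climb", "descend"],
}

# Stochastic dynamics: P[(s, a)] = list of (next_state, probability).
P = {
    ("safe_a", "climb"):   [("safe_b", 0.9), ("near", 0.1)],
    ("safe_a", "level"):   [("safe_a", 0.8), ("near", 0.2)],
    ("safe_b", "level"):   [("safe_b", 1.0)],
    ("near", "climb"):     [("safe_a", 0.7), ("unsafe", 0.3)],
    ("near", "descend"):   [("near", 0.5), ("unsafe", 0.5)],
}

# Value iteration for the maximal reachability probability: the action
# choice left open by the over-approximation is resolved pessimistically
# (max over actions) to obtain a sound upper bound.
prob = {s: (1.0 if s in unsafe else 0.0) for s in states}
for _ in range(1000):
    prob = {s: 1.0 if s in unsafe else
               max(sum(p * prob[t] for t, p in P[s, a])
                   for a in policy_overapprox[s])
            for s in states}

print({s: round(prob[s], 3) for s in states})  # upper bounds per initial state
```
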
The waveform feature is one of the requirements for the FRIB LLRF controllers. It is desired that the LLRF controllers store the internal data (e.g. the amplitude and phase information of forward/reverse/cavity signals) for at least one second of sampled data at the RF feedback control loop rate (around 1.25 MHz). One use case is to freeze the data buffer when an interlock event happens and read out the fast data to diagnose the problem. Another use case is to monitor a set of signals at a decimated rate (user settable) while the data buffer is still running, like using an oscilloscope. The detailed implementation will be discussed in the paper, including writing data into the DDR memory through the native interface, reading out the data through the bus interface, etc.
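
As a rough software model of the buffer behaviour described above (the actual FRIB implementation lives in FPGA firmware writing to DDR memory, which is not reproduced here), the sketch keeps a circular buffer at the loop rate, emits decimated samples for live monitoring, and freezes on an interlock so the captured data can be read out; all sizes and rates are illustrative.

```python
from collections import deque

# Software sketch only; the real system writes to DDR from firmware.
class WaveformBuffer:
    def __init__(self, depth, decimation=1024):
        self.samples = deque(maxlen=depth)  # circular buffer of recent samples
        self.decimation = decimation        # user-settable monitoring rate
        self.frozen = False
        self._count = 0

    def push(self, amplitude, phase):
        """Called at the feedback-loop rate (~1.25 MHz in the cited system)."""
        if self.frozen:
            return None                     # interlock: keep buffer contents
        self.samples.append((amplitude, phase))
        self._count += 1
        if self._count % self.decimation == 0:
            return (amplitude, phase)       # decimated sample for live monitoring
        return None

    def freeze(self):
        """Freeze on an interlock event so the fast data can be read out."""
        self.frozen = True

    def read_out(self):
        """Slow read-out of the captured waveform for diagnostics."""
        return list(self.samples)

# Example: capture some data, freeze on a fault, read out for diagnostics.
buf = WaveformBuffer(depth=1_250_000, decimation=1250)
for k in range(5000):
    buf.push(amplitude=1.0, phase=0.01 * k)
buf.freeze()
print(len(buf.read_out()), "samples captured")
```
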
We consider the problem of designing a stabilizing and optimal static controller with a pre-specified sparsity pattern. Since this problem is NP-hard in general, it is necessary to resort to approximation approaches. In this paper, we characterize a class of convex restrictions of this problem that are based on designing a separable quadratic Lyapunov function for the closed-loop system. This approach generalizes previous results based on optimizing over diagonal Lyapunov functions, thus allowing for improved feasibility and performance. Moreover, we suggest a simple procedure to compute favourable structures for the Lyapunov function yielding high-performance distributed controllers. Numerical examples validate our results.
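
For concreteness, the sketch below poses the diagonal-Lyapunov special case that the work above generalises: with a diagonal Q = P^{-1}, the change of variables Y = K Q preserves the sparsity pattern, so the design reduces to a linear matrix inequality. The plant matrices and the sparsity pattern are hypothetical, and cvxpy is used only as one convenient way to state the LMI.

```python
import cvxpy as cp
import numpy as np

# Diagonal-Lyapunov convex restriction for sparse static state feedback.
# Plant (A, B) and the allowed sparsity pattern are placeholders.
A = np.array([[0.5, 1.0, 0.0],
              [0.0, -0.2, 1.0],
              [0.5, 0.0, 0.1]])
B = np.eye(3)
pattern = np.array([[1, 1, 0],     # K[i, j] may be nonzero only where
                    [0, 1, 1],     # pattern[i, j] == 1
                    [0, 0, 1]])

n, m = A.shape[0], B.shape[1]
q = cp.Variable(n, nonneg=True)    # diagonal entries of Q = P^{-1}
Q = cp.diag(q)
Y = cp.Variable((m, n))

constraints = [q >= 1e-3]
# With Q diagonal, K = Y @ inv(Q) inherits the sparsity pattern of Y.
constraints += [Y[i, j] == 0 for i in range(m) for j in range(n)
                if pattern[i, j] == 0]
# Lyapunov LMI for the closed loop A + B K after the change of variables.
lmi = A @ Q + Q @ A.T + B @ Y + Y.T @ B.T
constraints += [lmi << -1e-3 * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
    K = Y.value @ np.linalg.inv(np.diag(q.value))
    print(np.round(K, 3))          # stabilising gain with the required zeros
```
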
We present a sound and automated approach to synthesize safe digital feedback controllers for physical plants represented as linear, time invariant models. Models are given as dynamical equations with inputs, evolving over a continuous state space and accounting for errors due to the digitalization of signals by the controller. Our approach has two stages, leveraging counterexample guided inductive synthesis (CEGIS) and reachability analysis. CEGIS synthesizes a static feedback controller that stabilizes the system under restrictions given by the safety of the reach space. Safety is verified either via BMC or abstract acceleration; if the verification step fails, we refine the controller by generalizing the counterexample. We synthesize stable and safe controllers for intricate physical plant models from the digital control literature.
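
A toy version of the CEGIS loop is sketched below for a scalar plant: a synthesiser proposes a gain that keeps all recorded counterexample initial states safe, and a verifier (a grid search standing in for BMC or abstract acceleration) hunts for a violating initial state. Every number in it is a made-up placeholder.

```python
import numpy as np

# Toy CEGIS loop: candidate gains vs. counterexample initial states.
A, B = 1.8, 1.0                     # discrete-time scalar plant x+ = A x + B u
X0 = (-1.0, 1.0)                    # set of initial states
SAFE = 4.0                          # safety: |x_t| <= SAFE for t = 0..HORIZON
HORIZON = 50

def simulate(k, x0):
    x, trace = x0, [x0]
    for _ in range(HORIZON):
        x = A * x + B * (-k * x)    # static feedback u = -k x
        trace.append(x)
    return trace

def verify(k):
    """Search for a counterexample initial state (stand-in for BMC)."""
    for x0 in np.linspace(*X0, 201):
        if max(abs(x) for x in simulate(k, x0)) > SAFE:
            return x0               # counterexample found
    return None

def synthesise(counterexamples, candidates=np.linspace(0.0, 3.0, 301)):
    """Propose a gain that keeps every recorded counterexample safe."""
    for k in candidates:
        if all(max(abs(x) for x in simulate(k, x0)) <= SAFE
               for x0 in counterexamples):
            return k
    return None

cex = [0.1]                         # start from one sampled initial state
while True:
    k = synthesise(cex)
    if k is None:
        print("no safe static gain found in the candidate set")
        break
    x0_bad = verify(k)
    if x0_bad is None:
        print(f"synthesised safe gain k = {k:.2f}")
        break
    cex.append(x0_bad)              # refine with the counterexample
```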