A barrier certificate can separate the state space of a considered hybrid system (HS) into safe and unsafe parts according to the safety property to be verified, and this notion has therefore been widely used in the verification of HSs. A stronger condition on barrier certificates means that only less expressive barrier certificates can be synthesized; on the other hand, synthesizing more expressive barrier certificates often incurs higher complexity. In [9], Kong et al. considered how to relax the condition on barrier certificates while still keeping it convex, so that more expressive barrier certificates can be synthesized efficiently using semi-definite programming (SDP). In this paper, we first discuss how to relax the condition on barrier certificates in a general way while still preserving convexity. In particular, one can then flexibly use different weaker conditions to efficiently synthesize different kinds of more expressive barrier certificates via SDP; these barriers give more opportunities to verify the considered system. We also show how to combine two functions into a combined barrier certificate in order to prove the safety property under consideration, even when neither of them can be used as a barrier certificate separately under any of the relaxed conditions. Another contribution of this paper is a discussion of how to discover certificates satisfying the general relaxed condition by SDP. In particular, we focus on how to avoid the unsoundness caused by the numerical errors of SDP, using symbolic checking.
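The abstract above refers to the standard barrier-certificate conditions and their SDP-friendly relaxations. As a rough illustration only (the system, sets, candidate barrier, and the constant λ below are our own hypothetical choices, not taken from the paper), a candidate barrier for a toy one-dimensional system can be sanity-checked numerically against the usual three conditions:

```python
import numpy as np

# Toy illustration of barrier-certificate conditions (all choices here are
# illustrative assumptions). For the 1-D system dx/dt = f(x) = -x, with
# initial set |x| <= 0.5 and unsafe set x >= 2, the candidate B(x) = x^2 - 1
# should satisfy:
#   (1) B(x) <= 0 on the initial set,
#   (2) B(x) >  0 on the unsafe set,
#   (3) a relaxed, exponential-type derivative condition
#       dB/dx * f(x) <= lam * B(x) for all x, here with lam = -1.
# A real synthesis procedure encodes (1)-(3) as SDP / sum-of-squares
# constraints; this is only a numerical sanity check on sampled points.

f = lambda x: -x            # vector field
B = lambda x: x**2 - 1.0    # candidate barrier certificate
dB = lambda x: 2.0 * x      # derivative dB/dx
lam = -1.0

init = np.linspace(-0.5, 0.5, 101)
unsafe = np.linspace(2.0, 10.0, 101)
domain = np.linspace(-10.0, 10.0, 1001)

ok_init = bool(np.all(B(init) <= 0))
ok_unsafe = bool(np.all(B(unsafe) > 0))
ok_deriv = bool(np.all(dB(domain) * f(domain) <= lam * B(domain)))
print(ok_init, ok_unsafe, ok_deriv)  # prints: True True True
```

In an actual SDP-based synthesis, the sampled checks are replaced by sum-of-squares constraints so that the conditions hold for all states, and, as the abstract notes, a symbolic post-check of the returned certificate guards against the numerical errors introduced by the SDP solver.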
We provide a novel approach to synthesizing controllers for nonlinear continuous dynamical systems with control inputs, against safety properties. The controllers are based on neural networks (NNs). To certify the safety property we utilize barrier functions,
A barrier certificate often serves as an inductive invariant that isolates an unsafe region from the reachable set of states, and hence is widely used for proving the safety of hybrid systems, possibly over an infinite time horizon. We present a novel con
We study the multi-agent safe control problem, where agents should avoid collisions with static obstacles and with each other while reaching their goals. Our core idea is to learn the multi-agent control policy jointly with learning the contr
Reactive and safe agent modeling is important for today's traffic-simulator designs and safe planning applications. In this work, we propose a reactive agent model that can ensure safety without compromising its original purposes, by learning only
Recent advances in deep machine learning have shown promise in solving complex perception and control loops via methods such as reinforcement and imitation learning. However, guaranteeing safety for such learned deep policies has been a challenge due