
Measurement-Robust Control Barrier Functions: Certainty in Safety with Uncertainty in State

Submitted by Ryan Cosner
Published: 2021
Paper language: English

The increasing complexity of modern robotic systems and the environments they operate in necessitates the formal consideration of safety in the presence of imperfect measurements. In this paper we propose a rigorous framework for safety-critical control of systems with erroneous state estimates. We develop this framework by leveraging Control Barrier Functions (CBFs) and unifying the method of Backup Sets for synthesizing control invariant sets with robustness requirements -- the end result is the synthesis of Measurement-Robust Control Barrier Functions (MR-CBFs). This provides theoretical guarantees on safe behavior in the presence of imperfect measurements and improved robustness over standard CBF approaches. We demonstrate the efficacy of this framework both in simulation and experimentally on a Segway platform using an onboard stereo-vision camera for state estimation.
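For orientation, a minimal sketch of the kind of robustified condition an MR-CBF imposes, assuming a control-affine system \dot{x} = f(x) + g(x)u, a CBF h with extended class-K function \alpha, and a bound \|x - \hat{x}\| \le \epsilon on the state-estimation error (the exact constants and construction in the paper may differ):

    L_f h(\hat{x}) + L_g h(\hat{x})\,u - (a + b\|u\|)\,\epsilon \ge -\alpha(h(\hat{x})),

where a and b collect Lipschitz constants of L_f h, L_g h, and \alpha \circ h. Any input satisfying this inflated inequality at the estimate \hat{x} also satisfies the true CBF condition L_f h(x) + L_g h(x)\,u \ge -\alpha(h(x)) at the actual state.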


Read also

Modern nonlinear control theory seeks to develop feedback controllers that endow systems with properties such as safety and stability. The guarantees ensured by these controllers often rely on accurate estimates of the system state for determining control actions. In practice, measurement model uncertainty can lead to error in state estimates that degrades these guarantees. In this paper, we seek to unify techniques from control theory and machine learning to synthesize controllers that achieve safety in the presence of measurement model uncertainty. We define the notion of a Measurement-Robust Control Barrier Function (MR-CBF) as a tool for determining safe control inputs when facing measurement model uncertainty. Furthermore, MR-CBFs are used to inform sampling methodologies for learning-based perception systems and to quantify the tolerable error in the resulting learned models. We demonstrate the efficacy of MR-CBFs in achieving safety with measurement model uncertainty on a simulated Segway system.
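As a concrete illustration, here is a hedged Python sketch of the kind of MR-CBF safety filter described above, assuming a control-affine system x' = f(x) + g(x)u, a bound eps on the state-estimation error, and margin constants a and b standing in for the paper's Lipschitz terms; all names and the exact constraint form are illustrative, not taken from the paper.

    # Hypothetical MR-CBF safety filter (sketch, not the paper's implementation).
    import cvxpy as cp

    def mr_cbf_filter(x_hat, u_des, f, g, h, grad_h, alpha, eps, a, b):
        """Minimally modify u_des so the inflated CBF condition holds at x_hat."""
        u = cp.Variable(len(u_des))
        Lfh = grad_h(x_hat) @ f(x_hat)   # Lie derivative of h along f
        Lgh = grad_h(x_hat) @ g(x_hat)   # Lie derivative of h along g
        # Robustified constraint: the (a + b*||u||)*eps margin absorbs the
        # effect of measurement error on the Lie derivatives.
        safe = Lfh + Lgh @ u - (a + b * cp.norm(u, 2)) * eps >= -alpha(h(x_hat))
        cp.Problem(cp.Minimize(cp.sum_squares(u - u_des)), [safe]).solve()
        return u.value

Because of the norm-of-u margin, the program is a second-order cone program rather than a plain QP, which cvxpy's default solvers handle directly.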
Control barrier functions have shown great success in addressing control problems with safety guarantees. These methods usually find the next safe control input by solving an online quadratic programming problem. However, model uncertainty is a major challenge in synthesizing controllers and may lead to unsafe control actions with severe consequences. In this paper, we develop a learning framework to deal with system uncertainty. Our method focuses on learning the dynamics of the control barrier function, especially for barrier functions with high relative degree with respect to the system. We show that, at each order, the time derivative of the control barrier function can be separated into the time derivative of the nominal control barrier function and a remainder. This implies that we can use a neural network to learn the remainder and thereby approximate the dynamics of the true control barrier function. We show by simulation that our method can generate safe trajectories under parametric uncertainty using a differential drive robot model.
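A hedged sketch of the residual-learning idea in this abstract: regress a small network on the gap between the measured time derivative of the barrier function and its nominal-model prediction. The class and function names are illustrative, not from the paper.

    import torch
    import torch.nn as nn

    class ResidualNet(nn.Module):
        """Learns the remainder r(x, u) ~ hdot_measured - hdot_nominal."""
        def __init__(self, n_state, n_input, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_state + n_input, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x, u):
            return self.net(torch.cat([x, u], dim=-1)).squeeze(-1)

    def train_step(model, opt, x, u, hdot_meas, hdot_nom):
        # Supervised regression on the remainder hdot_meas - hdot_nom.
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x, u), hdot_meas - hdot_nom)
        loss.backward()
        opt.step()
        return loss.item()

At deployment, the learned remainder would be added back to the nominal Lie-derivative terms inside the CBF constraint, so the online QP uses the corrected barrier dynamics.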
In this paper, we propose a notion of high-order (zeroing) barrier functions that generalizes the concept of zeroing barrier functions and guarantees set forward invariance by checking their higher-order derivatives. The proposed formulation guarantees asymptotic stability of the forward invariant set, which is highly favorable for robustness with respect to model perturbations. No forward completeness assumption is needed in our setting, in contrast to existing high-order barrier function methods. For the case of controlled dynamical systems, we relax the requirement of uniform relative degree and propose a singularity-free control scheme that yields a locally Lipschitz control signal and guarantees safety. Furthermore, the proposed formulation accounts for performance-critical control: it guarantees that a subset of the forward invariant set will admit any existing, bounded control law, while still ensuring forward invariance of the set. Finally, a non-trivial case study with rigid-body attitude dynamics and interconnected cell regions as the safe region is investigated.
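For context, a minimal sketch of the generic high-order barrier recursion that this formulation generalizes (the paper's own definition imposes conditions beyond this): given h with relative degree m, set \psi_0 = h and

    \psi_i(x) = \dot{\psi}_{i-1}(x) + \alpha_i(\psi_{i-1}(x)),    i = 1, ..., m,

for class-K functions \alpha_i. If \psi_i(x(0)) \ge 0 for all i and the control keeps \psi_m(x(t)) \ge 0 along solutions, then the intersection \bigcap_i \{x : \psi_i(x) \ge 0\} is forward invariant.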
Recent advances in Deep Machine Learning have shown promise in solving complex perception and control loops via methods such as reinforcement and imitation learning. However, guaranteeing safety for such learned deep policies has been a challenge due to issues such as partial observability and difficulty in characterizing the behavior of neural networks. While much emphasis in safe learning has been placed on training, it is non-trivial to guarantee safety at deployment or test time. This paper shows how, under mild assumptions, Safety Barrier Certificates can be used to guarantee safety with deep control policies despite uncertainty arising from perception and other latent variables. Specifically, for scenarios where the dynamics are smooth and the uncertainty has finite support, the proposed framework wraps around an existing deep control policy and generates safe actions by dynamically evaluating and modifying the policy's outputs. Our framework uses control barrier functions to construct the set of control actions that are safe under uncertainty and, when the original action violates the safety constraint, uses quadratic programming to minimally modify it so that it lies in the safe set. Representations of the environment are built through Euclidean signed distance fields, which are then used to infer the safety of actions and guarantee forward invariance. We implement this method in a simulated drone-racing environment and show that it yields safer actions than a baseline that relies on imitation learning alone to generate control actions.
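The "minimally modify the original action" step has a particularly simple form when a single affine CBF constraint a^T u >= b is active: the quadratic program reduces to a closed-form projection onto a half-space (the multi-constraint case needs a QP solver). A minimal numpy sketch under that assumption:

    import numpy as np

    def project_to_safe(u_policy, a, b):
        """Closed-form solution of  min ||u - u_policy||^2  s.t.  a @ u >= b."""
        slack = a @ u_policy - b
        if slack >= 0:          # policy action already satisfies the constraint
            return u_policy
        # Otherwise project onto the boundary hyperplane a @ u = b.
        return u_policy - a * slack / (a @ a)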
Zexiang Liu, Necmiye Ozay (2019)
This paper considers the problem of safety controller synthesis for systems equipped with sensor modalities that can provide preview information. We consider switched systems in which the switching mode is an external signal for which preview information is available; in particular, it is assumed that the sensors can notify the controller about an upcoming mode switch before it occurs. We propose the preview automaton, a mathematical construct that captures both the preview information and possible constraints on the switching signals. We then study the safety control synthesis problem with preview information and develop an algorithm that computes the maximal invariant set in a given mode-dependent safe set. These ideas are demonstrated on two case studies from the autonomous driving domain.
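For reference, a hedged sketch of the standard fixed-point iteration underlying maximal-invariant-set computation, here on a finite (abstracted) transition system with nondeterministic successors; the paper's algorithm additionally exploits preview information and mode-dependent safe sets, which this sketch omits.

    def maximal_invariant_set(safe, inputs, step):
        """Largest W subset of `safe` from which some input keeps the state in W.

        step(s, u) returns the set of possible successors of state s under u.
        Iterates W_{k+1} = {s in W_k : exists u with step(s, u) subset of W_k}.
        """
        W = set(safe)
        while True:
            W_next = {s for s in W if any(step(s, u) <= W for u in inputs)}
            if W_next == W:
                return W
            W = W_next

The iteration terminates on finite state sets because W can only shrink.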