
Learning Control Barrier Functions with High Relative Degree for Safety-Critical Control

Posted by Chuanzheng Wang
Publication date: 2020
Paper language: English





Control barrier functions have shown great success in addressing control problems with safety guarantees. These methods usually compute the next safe control input by solving an online quadratic program. However, model uncertainty is a major challenge in controller synthesis and may lead to unsafe control actions with severe consequences. In this paper, we develop a learning framework to deal with system uncertainty. Our method focuses on learning the dynamics of the control barrier function, in particular for barrier functions with high relative degree with respect to the system. We show that, at each order, the time derivative of the control barrier function can be separated into the time derivative of the nominal control barrier function and a remainder, so a neural network can be used to learn the remainder and thereby approximate the dynamics of the true control barrier function. We show by simulation that our method generates safe trajectories under parametric uncertainty using a differential drive robot model.
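A rough sketch of the decomposition the abstract describes, in illustrative notation (the nominal terms, the remainder terms \Delta_i, and the gains a_i are assumed symbols, not necessarily those of the paper): each time derivative of the barrier function h along the true dynamics splits into the derivative computed with the nominal model plus a remainder that a neural network can be trained to predict, and the safe input is then obtained from the usual quadratic program.

\[
  h^{(i)}(x) = h^{(i)}_{\mathrm{nom}}(x) + \Delta_i(x), \qquad i = 1, \dots, r-1,
\]
\[
  h^{(r)}(x, u) = h^{(r)}_{\mathrm{nom}}(x, u) + \Delta_r(x, u),
\]
\[
  u^{*} = \arg\min_{u} \; \lVert u - u_{\mathrm{ref}} \rVert^{2}
  \quad \text{s.t.} \quad
  h^{(r)}_{\mathrm{nom}}(x, u) + \widehat{\Delta}_r(x, u) + \sum_{i=0}^{r-1} a_i\, h^{(i)}(x) \ge 0,
\]

where \widehat{\Delta}_r denotes the learned estimate of the highest-order remainder.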


Read also

We introduce High-Relative-Degree Stochastic Control Lyapunov Functions and Barrier Functions as a means to ensure asymptotic stability of the system and to incorporate state-dependent safety constraints of high relative degree on nonlinear stochastic systems. Our proposed formulation also provides a generalisation of the existing literature on control Lyapunov and barrier functions for stochastic systems. The control policies are evaluated using a constrained quadratic program based on the control Lyapunov and barrier functions. Our proposed control design is validated via simulated experiments on a relative degree 2 system (2-dimensional car navigation) and a relative degree 4 system (two-link pendulum with an elastic actuator).
In this paper, we propose a notion of high-order (zeroing) barrier functions that generalizes the concept of zeroing barrier functions and guarantees set forward invariance by checking their higher order derivatives. The proposed formulation guarantees asymptotic stability of the forward invariant set, which is highly favorable for robustness with respect to model perturbations. No forward completeness assumption is needed in our setting in contrast to existing high order barrier function methods. For the case of controlled dynamical systems, we relax the requirement of uniform relative degree and propose a singularity-free control scheme that yields a locally Lipschitz control signal and guarantees safety. Furthermore, the proposed formulation accounts for performance-critical control: it guarantees that a subset of the forward invariant set will admit any existing, bounded control law, while still ensuring forward invariance of the set. Finally, a non-trivial case study with rigid-body attitude dynamics and interconnected cell regions as the safe region is investigated.
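For context, a generic way such high-order barrier conditions are often written (an illustrative sketch under standard notation, not this paper's exact formulation): starting from the barrier function h, one builds a chain of functions whose nonnegativity is enforced through the highest-order derivative,

\[
  \psi_0(x) = h(x), \qquad
  \psi_i(x) = \dot{\psi}_{i-1}(x) + \alpha_i\!\big(\psi_{i-1}(x)\big), \quad i = 1, \dots, m,
\]

with the \alpha_i extended class-\mathcal{K} functions; enforcing \psi_m(x, u) \ge 0 along trajectories keeps the intersection \bigcap_{i=0}^{m-1} \{x : \psi_i(x) \ge 0\} forward invariant.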
The increasing complexity of modern robotic systems and the environments they operate in necessitates the formal consideration of safety in the presence of imperfect measurements. In this paper we propose a rigorous framework for safety-critical control of systems with erroneous state estimates. We develop this framework by leveraging Control Barrier Functions (CBFs) and unifying the method of Backup Sets for synthesizing control invariant sets with robustness requirements -- the end result is the synthesis of Measurement-Robust Control Barrier Functions (MR-CBFs). This provides theoretical guarantees on safe behavior in the presence of imperfect measurements and improved robustness over standard CBF approaches. We demonstrate the efficacy of this framework both in simulation and experimentally on a Segway platform using an onboard stereo-vision camera for state estimation.
Reinforcement Learning (RL) algorithms have found limited success beyond simulated applications, and one main reason is the absence of safety guarantees during the learning process. Real world systems would realistically fail or break before an optimal controller can be learned. To address this issue, we propose a controller architecture that combines (1) a model-free RL-based controller with (2) model-based controllers utilizing control barrier functions (CBFs) and (3) on-line learning of the unknown system dynamics, in order to ensure safety during learning. Our general framework leverages the success of RL algorithms to learn high-performance controllers, while the CBF-based controllers both guarantee safety and guide the learning process by constraining the set of explorable policies. We utilize Gaussian Processes (GPs) to model the system dynamics and its uncertainties. Our novel controller synthesis algorithm, RL-CBF, guarantees safety with high probability during the learning process, regardless of the RL algorithm used, and demonstrates greater policy exploration efficiency. We test our algorithm on (1) control of an inverted pendulum and (2) autonomous car-following with wireless vehicle-to-vehicle communication, and show that our algorithm attains much greater sample efficiency in learning than other state-of-the-art algorithms and maintains safety during the entire learning process.
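As a minimal sketch of the safety-filter idea underlying such CBF-based controllers (not the authors' RL-CBF implementation; the function and argument names below are assumed for illustration): for a control-affine system with a single relative-degree-one CBF and no input bounds, the quadratic program that minimally modifies the RL action admits a closed-form projection.

import numpy as np

def cbf_safety_filter(u_rl, lie_f_h, lie_g_h, h, alpha=1.0):
    # Minimally modify the RL action u_rl so that the CBF condition
    #     Lf h(x) + Lg h(x) @ u + alpha * h(x) >= 0
    # holds; this is the closed-form solution of
    #     min ||u - u_rl||^2  s.t.  Lf h + Lg h @ u + alpha * h >= 0
    # for a single affine constraint and unbounded inputs.
    u_rl = np.asarray(u_rl, dtype=float)
    lie_g_h = np.asarray(lie_g_h, dtype=float)
    slack = lie_f_h + lie_g_h @ u_rl + alpha * h
    if slack >= 0.0:
        return u_rl                       # RL action is already safe
    # Project onto the boundary hyperplane Lf h + Lg h @ u + alpha * h = 0
    return u_rl - slack * lie_g_h / (lie_g_h @ lie_g_h)

# Toy usage: scalar system x_dot = u with barrier h(x) = 1 - x (keep x <= 1).
# At x = 0.9: h = 0.1, Lf h = 0, Lg h = [-1]; an aggressive RL action gets clipped.
print(cbf_safety_filter(u_rl=[2.0], lie_f_h=0.0, lie_g_h=[-1.0], h=0.1))  # -> [0.1]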
Ugo Rosolia, Aaron D. Ames (2020)
In this paper we present a multi-rate control architecture for safety critical systems. We consider a high level planner and a low level controller which operate at different frequencies. This multi-rate behavior is described by a piecewise nonlinear model which evolves on a continuous and a discrete level. First, we present sufficient conditions which guarantee recursive constraint satisfaction for the closed-loop system. Afterwards, we propose a control design methodology which leverages Control Barrier Functions (CBFs) for low level control and Model Predictive Control (MPC) policies for high level planning. The control barrier function is designed using the full nonlinear dynamical model and the MPC is based on a simplified planning model. When the nonlinear system is control affine and the high level planning model is linear, the control actions are computed by solving convex optimization problems at each level of the hierarchy. Finally, we show the effectiveness of the proposed strategy on a simulation example, where the low level control action is updated at a higher frequency than the high level command.