The safety of Automated Vehicles (AVs) as Cyber-Physical Systems (CPS) depends on the safety of their constituent modules (software and hardware) and on their rigorous integration. Deep learning is one of the dominant techniques used for perception, prediction, and decision-making in AVs. The accuracy of predictions and decisions is highly dependent on the tests used for training the underlying deep-learning models. In this work, we propose a method for screening and classifying simulation-based driving test data to be used for training and testing controllers. Our method is based on monitoring and falsification techniques, which lead to a systematic, automated procedure for generating and selecting qualified test data. We use Responsibility-Sensitive Safety (RSS) rules as our qualifier specifications to filter out random tests that do not satisfy the RSS assumptions. The remaining tests therefore cover driving scenarios in which the controlled vehicle does not respond safely to its environment. Our framework is distributed with the publicly available S-TaLiRo and Sim-ATAV tools.
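To make the qualifier step concrete, below is a minimal sketch of screening two-vehicle simulation traces against the RSS longitudinal safe-distance rule. The parameter values, the (gap, v_rear, v_front) trace format, and all function names are illustrative assumptions, not the S-TaLiRo/Sim-ATAV implementation.

```python
# Minimal sketch of an RSS-style longitudinal safe-distance check used to
# screen simulation traces. Parameter values and the trace format are
# illustrative assumptions, not the S-TaLiRo / Sim-ATAV implementation.
RHO = 0.5        # response time [s]
A_ACCEL = 3.0    # max acceleration of the rear (ego) car during response [m/s^2]
B_MIN = 4.0      # minimum braking of the rear car [m/s^2]
B_MAX = 8.0      # maximum braking of the front car [m/s^2]

def rss_safe_distance(v_rear, v_front):
    """RSS longitudinal minimum safe gap between a rear and a front vehicle."""
    d = (v_rear * RHO
         + 0.5 * A_ACCEL * RHO**2
         + (v_rear + RHO * A_ACCEL)**2 / (2.0 * B_MIN)
         - v_front**2 / (2.0 * B_MAX))
    return max(d, 0.0)

def violates_rss(trace):
    """trace: iterable of (gap, v_rear, v_front) samples along one test."""
    return any(gap < rss_safe_distance(vr, vf) for gap, vr, vf in trace)

def screen_tests(traces):
    """Keep only tests in which the ego vehicle fails to keep an RSS-safe gap."""
    return [t for t in traces if violates_rss(t)]
```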
This paper focuses on learning a model of system dynamics online while satisfying safety constraints. Our motivation is to avoid offline system identification or hand-specified dynamics models and allow a system to safely and autonomously estimate and adapt its own model during online operation. Given streaming observations of the system state, we use Bayesian learning to obtain a distribution over the system dynamics. In turn, the distribution is used to optimize the system behavior and ensure safety with high probability, by specifying a chance constraint over a control barrier function.
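One standard way to write such a chance constraint over a control barrier function $h$ is sketched below; the notation (additive model error $d$, class-$\mathcal{K}$ function $\alpha$, risk level $\epsilon$) is generic and not necessarily the authors' exact formulation.

```latex
% Learned dynamics \dot{x} = f(x) + g(x)u + d(x), with d drawn from the
% Bayesian posterior given data \mathcal{D}; safe set {x : h(x) >= 0}.
\Pr_{d \sim p(\cdot \mid \mathcal{D})}\!\left[
  \nabla h(x)^{\top}\!\left(f(x) + g(x)u + d(x)\right) + \alpha\!\left(h(x)\right) \ge 0
\right] \;\ge\; 1 - \epsilon
```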
As safety-critical autonomous vehicles (AVs) will soon become pervasive in our society, a number of safety concepts for trusted AV deployment have recently been proposed throughout industry and academia. Yet, agreeing upon an appropriate safety concept is still an elusive task. In this paper, we advocate for the use of Hamilton-Jacobi (HJ) reachability as a unifying mathematical framework for comparing existing safety concepts, and propose ways to expand its modeling premises in a data-driven fashion. Specifically, we show that (i) existing predominant safety concepts can be embedded in the HJ reachability framework, thereby enabling a common language for comparing and contrasting modeling assumptions, and (ii) HJ reachability can serve as an inductive bias to effectively reason, in a data-driven context, about two critical, yet often overlooked, aspects of safety: responsibility and context-dependency.
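For reference, a standard formulation from the HJ reachability literature (generic notation, not necessarily the one used in this paper) characterizes the safety value function through a variational inequality:

```latex
% Dynamics \dot{x} = f(x,u,d); l(x) >= 0 encodes the constraint set.
% The safe set maintained up to the horizon is {x : V(x,t) > 0}.
\min\left\{
  \frac{\partial V}{\partial t}
  + \max_{u \in \mathcal{U}} \min_{d \in \mathcal{D}} \nabla_x V(x,t)^{\top} f(x,u,d),\;
  l(x) - V(x,t)
\right\} = 0,
\qquad V(x,T) = l(x).
```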
The testing of Deep Neural Networks (DNNs) has become increasingly important as DNNs are widely adopted by safety-critical systems. While many test adequacy criteria have been suggested, automated test input generation for many types of DNNs remains a challenge because the raw input space is too large to randomly sample or to navigate and search for plausible inputs. Consequently, current testing techniques for DNNs depend on small local perturbations to existing inputs, based on the metamorphic testing principle. We propose new ways to search not over the entire image space, but rather over a plausible input space that resembles the true training distribution. This space is constructed using Variational Autoencoders (VAEs) and navigated through their latent vector space. We show that this space helps efficiently produce test inputs that can reveal information about the robustness of DNNs when dealing with realistic tests, opening the field to meaningful exploration through the space of highly structured images.
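A minimal sketch of the latent-space search idea is shown below, assuming a pre-trained VAE exposing encode/decode methods and a classifier under test with a predict method; the interfaces, the Gaussian perturbation scheme, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def latent_space_tests(vae, model_under_test, seed_images, sigma=0.1, n_samples=20):
    """Search the VAE latent space for inputs that change the model's prediction.

    Assumed (illustrative) interfaces: vae.encode(img) -> latent vector (numpy
    array), vae.decode(z) -> image, model_under_test.predict(img) -> label.
    """
    revealing = []
    for img in seed_images:
        z = vae.encode(img)                                # latent code of the seed
        original_label = model_under_test.predict(img)
        for _ in range(n_samples):
            z_new = z + sigma * np.random.randn(*z.shape)  # small walk in latent space
            candidate = vae.decode(z_new)                  # stays near the data manifold
            if model_under_test.predict(candidate) != original_label:
                revealing.append(candidate)                # behaviour changed: keep it
    return revealing
```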
When autonomous robots interact with humans, such as during autonomous driving, explicit safety guarantees are crucial in order to avoid potentially life-threatening accidents. Many data-driven methods have explored learning probabilistic bounds over human agents' trajectories (i.e., confidence tubes that contain trajectories with probability $\delta$), which can then be used to guarantee safety with probability $1-\delta$. However, almost all existing works consider $\delta \geq 0.001$. The purpose of this paper is to argue that (1) in safety-critical applications, it is necessary to provide safety guarantees with $\delta < 10^{-8}$, and (2) current learning-based methods are ill-equipped to compute accurate confidence bounds at such low $\delta$. Using human driving data (from the highD dataset), as well as synthetically generated data, we show that current uncertainty models use inaccurate distributional assumptions to describe human behavior and/or require infeasible amounts of data to accurately learn confidence bounds for $\delta \leq 10^{-8}$. These two issues result in unreliable confidence bounds, which can have dangerous implications if deployed on safety-critical systems.
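As a rough back-of-the-envelope illustration of the data requirement (our numbers, not the paper's analysis): by the classical rule of three, empirically certifying a violation probability of at most $\delta$ from trajectories with zero observed violations needs on the order of $3/\delta$ samples.

```latex
% Rule of three: with n i.i.d. trajectories and zero observed violations,
% an approximate 95% upper confidence bound on the violation probability is 3/n.
% Certifying \Pr[\text{violation}] \le \delta = 10^{-8} thus requires roughly
n \;\gtrsim\; \frac{3}{\delta} \;=\; 3 \times 10^{8} \ \text{independent trajectories.}
```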
Safety and tracking stability are crucial for safety-critical systems such as self-driving cars, autonomous mobile robots, and industrial manipulators. To efficiently control safety-critical systems so as to ensure their safety and achieve tracking stability, accurate system dynamics models are usually required. However, accurate system models are not always available in practice. In this paper, a learning-based safety-stability-driven control (LBSC) algorithm is presented to guarantee safety and tracking stability for nonlinear safety-critical systems subject to control input constraints under model uncertainties. Gaussian Processes (GPs) are employed to learn the model error between the nominal model and the actual system dynamics, and the estimated mean and variance of the model error are used to quantify a high-confidence uncertainty bound. Using this estimated uncertainty bound, a safety barrier constraint is devised to ensure safety, and a stability constraint is developed to achieve rapid and accurate tracking. The proposed LBSC method is then formulated as a quadratic program incorporating the safety barrier, the stability constraint, and the control input constraints. The effectiveness of the LBSC method is illustrated on a safety-critical connected cruise control (CCC) system simulator under model uncertainties.
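A minimal sketch of one such quadratic program, written with cvxpy, is given below; the Lie-derivative inputs, the way the GP uncertainty bound tightens the barrier constraint, and the slack formulation are illustrative assumptions rather than the exact LBSC formulation.

```python
import numpy as np
import cvxpy as cp

def lbsc_qp_step(Lf_h, Lg_h, gamma_h, Lf_V, Lg_V, gamma_V,
                 sigma_bound, u_min, u_max, slack_weight=1e3):
    """Solve one control step as a QP with a barrier and a stability constraint.

    Lf_h, Lg_h : Lie derivatives of the barrier h along the nominal dynamics
                 (scalar and 1-D numpy array of length m, respectively)
    Lf_V, Lg_V : Lie derivatives of the Lyapunov-like tracking function V
    gamma_h    : value of the class-K term alpha(h(x))
    gamma_V    : stability decay term, e.g. lambda * V(x)
    sigma_bound: high-confidence bound on the GP-estimated model error
    All names and the way the bound enters are illustrative assumptions.
    """
    m = Lg_h.shape[0]
    u = cp.Variable(m)                    # control input
    delta = cp.Variable(nonneg=True)      # slack on the stability constraint

    objective = cp.Minimize(cp.sum_squares(u) + slack_weight * delta)
    constraints = [
        Lf_h + Lg_h @ u + gamma_h >= sigma_bound,   # safety barrier, tightened
        Lf_V + Lg_V @ u + gamma_V <= delta,         # tracking stability, softened
        u >= u_min,
        u <= u_max,
    ]
    cp.Problem(objective, constraints).solve()
    return u.value

# Illustrative call with made-up numbers:
# u = lbsc_qp_step(Lf_h=0.2, Lg_h=np.array([1.0]), gamma_h=0.5,
#                  Lf_V=1.0, Lg_V=np.array([-0.8]), gamma_V=0.3,
#                  sigma_bound=0.1, u_min=-2.0, u_max=2.0)
```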